METHODS AND APPARATUS EMPLOYING FEC CODES WITH PERMANENT INACTIVATION OF SYMBOLS FOR ENCODING AND DECODING PROCESSES
Patent abstract:
Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes. Encoding of a plurality of encoded symbols is provided, where an encoded symbol is generated from a combination of a first symbol generated from a first set of intermediate symbols and a second symbol generated from a second set of intermediate symbols, each set having at least one different encoding parameter, and where the intermediate symbols are generated based on the set of source symbols. A data decoding method is also provided, where a set of intermediate symbols is decoded from a set of received encoded symbols, the intermediate symbols being arranged into a first and a second set of symbols for decoding, where the intermediate symbols in the second set are permanently inactivated for the purposes of scheduling the decoding process that recovers the intermediate symbols, and where at least some of the source symbols are recovered from the decoded set of intermediate symbols.
Publication number: BR112012003688B1
Application number: R112012003688-2
Filing date: 2010-08-19
Publication date: 2021-03-23
Inventors: Michael G. Luby; Mohammad Amin Shokrollahi; Lorenz Minder
Applicant: Qualcomm Incorporated
IPC main class:
Patent description:
[0001] This application is a continuation-in-part of U.S. Patent Application No. 12/604,773, filed October 23, 2009, by M. Amin Shokrollahi, et al., and entitled "Method and Apparatus Employing FEC Codes with Permanent Inactivation of Symbols for Encoding and Decoding Processes", and additionally claims priority to the following provisional applications, each by M. Amin Shokrollahi, et al. and each entitled "Method and Apparatus Employing FEC Codes with Permanent Inactivation of Symbols for Encoding and Decoding Processes": U.S. Provisional Patent Application No. 61/353,910, filed June 11, 2010; U.S. Provisional Patent Application No. 61/257,146, filed November 2, 2009; and U.S. Provisional Patent Application No. 61/235,285, filed August 19, 2009. Each provisional and non-provisional application cited above is hereby incorporated by reference for all purposes. [0002] The following references are hereby incorporated by reference in their entirety for all purposes: 1) U.S. Patent No. 6,307,487, issued to Michael G. Luby, entitled "Information Additive Code Generator and Decoder for Communication Systems" (hereinafter "Luby I"); 2) U.S. Patent No. 6,320,520, issued to Michael G. Luby, entitled "Information Additive Group Code Generator and Decoder for Communication Systems" (hereinafter "Luby II"); 3) U.S. Patent No. 7,068,729, issued to M. Amin Shokrollahi, entitled "Multi-Stage Code Generator and Decoder for Communication Systems" (hereinafter "Shokrollahi I"); 4) U.S. Patent No. 6,856,263, issued to M. Amin Shokrollahi, entitled "Systems and Processes for Decoding a Chain Reaction Code Through Inactivation" (hereinafter "Shokrollahi II"); 5) U.S. Patent No. 6,909,383, issued to M. Amin Shokrollahi, entitled "Systematic Encoding and Decoding of Chain Reaction Codes" (hereinafter "Shokrollahi III"); 6) U.S. Patent Publication No. 2006/0280254, naming Michael G. Luby and M. Amin Shokrollahi, entitled "In-Place Transformations with Applications to Encoding and Decoding Various Classes of Codes" (hereinafter "Luby III"); 7) U.S. Patent Publication No. 2007/0195894, naming M. Amin Shokrollahi, entitled "Multiple-Field-Based Code Generator and Decoder for Communications Systems" (hereinafter "Shokrollahi IV"). [0003] The present invention relates to the encoding and decoding of data in communication systems, and more specifically to communication systems that encode and decode to compensate for errors and gaps in communicated data in an efficient manner.
Fundamentals of the Invention [0004] Techniques for transmitting files between a sender and a recipient through a communications channel are the subject of much literature. Preferably, a recipient wishes to receive, with some degree of certainty, an exact copy of the data transmitted over a channel by a sender. Where the channel lacks perfect fidelity (which covers nearly all physically feasible systems), one concern is how to deal with data that is lost or corrupted in the transmission. Lost data (erasures) are often easier to deal with than corrupted data (errors), since the recipient may not always know when corrupted data has been received in error. Many error-correcting codes have been developed to correct erasures and/or errors. Typically, the particular code used is chosen based on some information about the infidelities of the channel through which the data is being transmitted and the nature of the data being transmitted. For example, where the channel is known to have long periods of infidelity, a burst error code may be more suitable for that application. Where only short, infrequent errors are expected, a simple parity code may be better. [0005] As used here, "source data" refers to data that is available from one or more senders and that a receiver is to obtain by recovering it from a transmitted sequence, with or without errors and/or erasures, etc. As used here, "encoded data" refers to data that is conveyed and can be used to recover or obtain the source data. In a simple case, the encoded data is a copy of the source data, but if the received encoded data differs (due to errors and/or erasures) from the transmitted encoded data, then in that simple case the source data may not be fully recoverable without additional data about the source data. Transmission can take place through space or time. In a more complex case, the encoded data is generated based on the source data in a transformation and is transmitted from one or more senders to the receivers.
The encoding is considered "systematic" if the source data is included as part of the encoded data. In a simple example of systematic encoding, redundant information about the source data is appended to the end of the source data to form the encoded data. [0006] Also as used here, "input data" refers to data that is present at an input of an FEC (forward error correction) encoding apparatus or an FEC encoding module, component, step, etc. ("FEC encoder"), and "output data" refers to data that is present at an output of an FEC encoder. Correspondingly, output data would be expected to be present at an input of an FEC decoder, and the FEC decoder would be expected to output the input data, or a correspondence thereof, based on the output data it processed. In some cases, the input data is, or includes, source data, and in some cases, the output data is, or includes, encoded data. In other cases, a sending device or sending program code may comprise more than one FEC encoder, that is, source data is transformed into encoded data by a series of a plurality of FEC encoders. Similarly, at the receiver, there may be more than one FEC decoder applied to generate source data from the received encoded data. [0007] Data can be thought of as partitioned into symbols. An encoder is a computer system, device, electronic circuit, or the like, that generates encoded symbols or output symbols from a sequence of source symbols or input symbols, and a decoder is the counterpart that recovers a sequence of source symbols or input symbols from received or recovered encoded symbols or output symbols. The encoder and decoder are separated in time and/or space by the channel, and any received encoded symbols may not be exactly the same as the corresponding transmitted encoded symbols and may not be received in exactly the same sequence as they were transmitted.
The "size" of a symbol can be measured in bits, whether or not the symbol is actually divided into a sequence of bits, where a symbol has a size of M bits when the symbol is selected from an alphabet of 2^M symbols. In many of the examples presented here, symbols are measured in bytes and codes may operate over a field of 256 possibilities (there are 256 possible 8-bit patterns), but it should be understood that different units of data measurement can be used and that measuring data in various ways is well known. [0008] Luby I describes the use of codes, such as chain reaction codes, to address error correction efficiently in terms of computation, memory, and bandwidth. One property of the encoded symbols produced by a chain reaction encoder is that a receiver is able to recover the original file as soon as sufficiently many encoded symbols have been received. Specifically, to recover the original K source symbols with high probability, the receiver needs approximately K + A encoded symbols. [0009] The "absolute reception overhead" for a given situation is represented by the value A, while a "relative reception overhead" can be calculated as the ratio A/K. The absolute reception overhead is a measure of how much extra data needs to be received beyond the theoretical minimum amount of information; it can depend on the reliability of the decoder and can vary as a function of the number, K, of source symbols. Similarly, the relative reception overhead, A/K, is a measure of how much additional data needs to be received beyond the theoretical minimum amount of information, relative to the size of the source data being recovered; it may also depend on the reliability of the decoder and can vary as a function of the number, K, of source symbols. [0010] Chain reaction codes are extremely useful for communication over a packet-based network. However, they can be quite computationally intensive at times.
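The symbol and alphabet definitions above can be made concrete with a short sketch (illustrative only, not part of the patent): symbols as fixed-size byte strings of M bits drawn from an alphabet of 2^M values, combined by the bytewise XOR operation commonly used in erasure codes such as chain reaction codes.

```python
def xor_symbols(a: bytes, b: bytes) -> bytes:
    """Combine two equal-size symbols with bytewise XOR."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

symbol_size_bytes = 4                # each symbol is 4 bytes here
M = symbol_size_bytes * 8            # symbol size in bits
alphabet_size = 2 ** M               # a symbol takes one of 2^M values

s1 = b"\x0f\x00\xff\x12"
s2 = b"\xf0\x00\x0f\x34"
combined = xor_symbols(s1, s2)
# XOR is its own inverse, a property erasure decoders rely on:
assert xor_symbols(combined, s2) == s1
```

Any fixed symbol size works the same way; one byte per symbol is used in later sketches purely to keep examples small.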
A decoder may be able to decode more often, or more easily, if the source symbols are encoded using a static encoder before a dynamic encoder that encodes using a chain reaction or another rateless code. Such decoders are illustrated in Shokrollahi I, for example. In the examples illustrated there, the source symbols are the input symbols to a static encoder that produces output symbols that are the input symbols to a dynamic encoder that produces output symbols that are the encoded symbols, where the dynamic encoder is a rateless encoder that can generate a number of output symbols that is not a fixed rate with respect to the number of input symbols. The static encoder can include more than one fixed-rate encoder. For example, a static encoder can include a Hamming encoder, a low-density parity-check ("LDPC") encoder, a high-density parity-check ("HDPC") encoder, and/or the like. [0011] Chain reaction codes have the property that, as some symbols are recovered at the decoder from the received symbols, those symbols can be used to recover additional symbols, which, in turn, can be used to recover still more symbols. Preferably, the chain reaction of symbol solving at the decoder can continue such that all desired symbols are recovered before the set of received symbols is exhausted. Preferably, the computational complexity of performing the chain reaction encoding and decoding processes is low. [0012] A recovery process at the decoder can involve determining which symbols were received, creating a matrix that maps the original input symbols to the encoded symbols that were received, then inverting the matrix and performing a matrix multiplication of the inverted matrix and a vector of the received encoded symbols. In a typical system, a brute-force implementation of this can consume excessive computing effort and memory.
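The static-then-dynamic structure described above can be sketched as a toy (the function names are invented, the static stage here is a simple parity code standing in for real LDPC/HDPC codes, and the dynamic stage uses a naive uniform degree choice rather than a designed degree distribution):

```python
import random

def xor_all(symbols):
    """Bytewise XOR of a nonempty list of equal-size symbols."""
    out = bytearray(len(symbols[0]))
    for s in symbols:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

def static_encode(source, num_redundant):
    """Stage 1 (fixed rate): append redundant symbols; here each is a
    parity over a stride of the source symbols (a toy stand-in)."""
    redundant = [xor_all(source[i::num_redundant]) for i in range(num_redundant)]
    return source + redundant            # the intermediate symbols

def dynamic_encode(intermediate, key):
    """Stage 2 (rateless): any number of encoded symbols, each the XOR
    of a pseudorandom subset selected by the symbol's key."""
    rng = random.Random(key)
    degree = rng.randint(1, len(intermediate))
    neighbors = rng.sample(range(len(intermediate)), degree)
    return xor_all([intermediate[j] for j in neighbors])

source = [bytes([i]) * 4 for i in range(8)]        # 8 source symbols
intermediate = static_encode(source, num_redundant=2)
encoded = [dynamic_encode(intermediate, key) for key in range(12)]
assert len(encoded) == 12    # more keys -> more symbols; no fixed rate
```

The point of the sketch is the shape of the pipeline: the dynamic stage can emit as many encoded symbols as desired, each reproducible at the receiver from its key.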
Obviously, for a particular set of received encoded symbols, it may be impossible to recover all of the original input symbols, but even where recovery is possible, it can be computationally expensive to compute the result. [0013] Shokrollahi II describes an approach called "inactivation", in which decoding takes place in two steps. In the first step, the decoder takes inventory of the received encoded symbols it has available, determines what the matrix should look like, and determines, at least approximately, a sequence of decoding steps that will allow the chain reaction process to complete successfully with the received encoded symbols. In the second step, the decoder runs the chain reaction decoding according to the determined sequence of decoding steps. This can be done in a memory-efficient manner (that is, in a way that requires less memory storage for the operation than a memory-inefficient process). [0014] In an inactivation approach, the first decoding step involves manipulating the matrix, or its equivalent, to determine some number of input symbols that can be solved; when that determination stalls, one of the input symbols is designated an "inactivated symbol" and the determination process continues, treating the inactivated symbol as if it were actually solved; then, at the end, the inactivated symbols are solved using Gaussian elimination, or some other method, to invert a matrix that is much smaller than the original decoding matrix. Using this determination, the chain reaction sequence can be performed on the received encoded symbols to arrive at the recovered input symbols, which can be either all of the original input symbols or a suitable set of the original input symbols.
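The matrix view behind both the brute-force decoding of the preceding paragraphs and the Gaussian-elimination step of the inactivation approach can be sketched as follows (an illustration under simplifying assumptions: one-byte symbols, a square binary matrix, and plain Gaussian elimination over GF(2) with no scheduling or inactivation):

```python
def gf2_solve(A, D):
    """Solve A*C = D by Gaussian elimination over GF(2).
    A: list of rows (lists of 0/1) saying which input symbols were
    XORed to form each received encoded symbol; D: the received
    encoded symbols as byte strings. Returns the recovered symbols,
    or None if the system cannot be solved."""
    n = len(A[0])
    A = [row[:] for row in A]            # work on copies
    D = list(D)
    for col in range(n):
        pivot = next((r for r in range(col, len(A)) if A[r][col]), None)
        if pivot is None:
            return None                  # cannot recover all symbols
        A[col], A[pivot] = A[pivot], A[col]
        D[col], D[pivot] = D[pivot], D[col]
        for r in range(len(A)):
            if r != col and A[r][col]:   # eliminate this column elsewhere
                A[r] = [x ^ y for x, y in zip(A[r], A[col])]
                D[r] = bytes(x ^ y for x, y in zip(D[r], D[col]))
    return D[:n]

C = [b"\x01", b"\x02", b"\x04"]          # the original input symbols
A = [[1, 1, 0],                          # received: C0 ^ C1
     [0, 1, 1],                          # received: C1 ^ C2
     [1, 1, 1]]                          # received: C0 ^ C1 ^ C2
D = [b"\x03", b"\x06", b"\x07"]          # the received encoded symbols
assert gf2_solve(A, D) == C              # recovers the input symbols
```

For large K this cubic-cost elimination is exactly the expense the inactivation approach avoids by confining it to a much smaller submatrix.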
[0015] For some applications that impose stricter constraints on the decoder, such as where the decoder is on a low-power device with limited memory and computing power, or where there are stricter constraints on the allowed relative or absolute reception overhead, improvements over the inactivation approach described above may be called for. [0016] In addition, methods for dividing a file or a large block of data into as few source blocks as possible, subject to a constraint on the minimum sub-symbol size, and then dividing those into as few sub-blocks as possible, subject to a constraint on the maximum sub-block size, can be useful. Brief Summary of the Invention [0017] According to one embodiment of an encoder in accordance with aspects of the present invention, an encoder, in, at, or for a sender, transmits an ordered set of source symbols from one or more senders to one or more receivers via a communications channel, where the encoder generates data to be sent that includes a plurality of encoded symbols generated from the source symbols. In a first step, intermediate symbols are generated from the source symbols using a method that is invertible, that is, there is also an inverse method for generating the source symbols from the intermediate symbols. In another step, the intermediate symbols are divided into a first set of intermediate symbols and a second set of intermediate symbols, where there is at least one intermediate symbol in the first set of intermediate symbols, there is at least one intermediate symbol in the second set of intermediate symbols, and at least one encoded symbol is generated from at least one intermediate symbol from each of the two sets. In some variations, there are more than two sets.
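A minimal sketch of the two-set encoding step above follows. The split point, set names, and degree choices here are invented for illustration and are not the patent's actual parameters; the sketch only shows the shape of the idea, namely that each encoded symbol combines a contribution from each of the two sets, with different encoding parameters per set.

```python
import random

def xor_all(symbols):
    """Bytewise XOR of a nonempty list of equal-size symbols."""
    out = bytearray(len(symbols[0]))
    for s in symbols:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

def encoded_symbol(first_set, second_set, key):
    """One encoded symbol = XOR of a symbol drawn from each set,
    with a different (hypothetical) parameter choice per set."""
    rng = random.Random(key)
    # First set: variable, typically low degree (chain-reaction-like).
    d1 = rng.choice([1, 2, 2, 3])
    t1 = xor_all([first_set[j] for j in rng.sample(range(len(first_set)), d1)])
    # Second set: a different, denser parameter choice.
    d2 = rng.randint(1, len(second_set))
    t2 = xor_all([second_set[j] for j in rng.sample(range(len(second_set)), d2)])
    return bytes(x ^ y for x, y in zip(t1, t2))

intermediate = [bytes([i]) * 2 for i in range(10)]
first_set, second_set = intermediate[:7], intermediate[7:]   # the two sets
symbols = [encoded_symbol(first_set, second_set, k) for k in range(5)]
assert all(len(s) == 2 for s in symbols)
```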
[0018] In some embodiments, values for a first set and a second set of temporary symbols are generated, where the values of the first set of temporary symbols depend on the values of the first set of intermediate symbols and the values of the second set of temporary symbols depend on the values of the second set of intermediate symbols. The values of the encoded symbols are generated from the first set and the second set of temporary symbols. [0019] In some variations, the number of encoded symbols that can be generated is independent of the number of source symbols. [0020] Decoder embodiments are also provided. According to a decoder embodiment in accordance with aspects of the present invention, a decoder, in, at, or for a receiver, receives encoded symbols generated from intermediate symbols, where the intermediate symbols are generated from source symbols using a method that is invertible, that is, there is also an inverse method for generating the source symbols from the intermediate symbols, and where at least one of the intermediate symbols is designated a permanently inactivated symbol and there is at least one other intermediate symbol that is not among the permanently inactivated symbols. The decoder decodes a set of intermediate symbols from the received encoded symbols, taking into account the at least one permanently inactivated symbol, and generates source symbols from the decoded set of intermediate symbols using the inverse method. [0021] In decoding, the decoding steps are scheduled, setting the permanently inactivated symbols aside during scheduling. The permanently inactivated symbols can be solved using novel or conventional methods and then used to solve for the other intermediate symbols. One approach to solving for the permanently inactivated symbols (and any other inactivated symbols, if used) is to apply Gaussian elimination to solve for the inactivated symbols.
Some of the remaining intermediate symbols can then be recovered based on the values of the recovered permanently inactivated symbols and the received encoded symbols. [0022] In some variations of the decoding method, the permanently inactivated symbols comprise the second set of intermediate symbols of the encoding embodiments. In some variations of the decoding method, the permanently inactivated symbols comprise a subset of the intermediate symbols where the corresponding encoding method is not a multi-stage chain reaction code. Such encoding methods may include one or more of a Tornado code, a Reed-Solomon code, a chain reaction code (examples of which are described in Luby I), or the like, for the subset of intermediate symbols. [0023] Intermediate symbols are used for encoding and decoding, where the method for generating the intermediate symbols from the source symbols and the corresponding inverse method are chosen for a desired set of performance characteristics, such as decodability. In some embodiments, the intermediate symbols comprise the source symbols. In some embodiments, the intermediate symbols comprise the source symbols together with redundant symbols that are generated from the source symbols, where the redundant symbols can be chain reaction symbols, LDPC symbols, HDPC symbols, or other types of redundant symbols. Alternatively, the intermediate symbols can be based on prescribed relationships between symbols, for example, relationships between the intermediate symbols and the source symbols, together with additional LDPC and HDPC relationships among the intermediate symbols, where a decoding method is used to generate the intermediate symbols from the source symbols based on the prescribed relationships. [0024] The methods and systems can be implemented by electronic circuits, or by a processing device that executes programming instructions and has appropriate program code to implement encoding and/or decoding. [0025] Numerous benefits are achieved by means of the present invention.
For example, in one specific embodiment, the computational cost of encoding data for transmission over a channel is reduced. In another specific embodiment, the computational cost of decoding such data is reduced. In another specific embodiment, the absolute and relative reception overheads are substantially reduced. Depending on the embodiment, one or more benefits can be achieved. These and other benefits are described in greater detail throughout the present specification, and more particularly below. [0026] A further understanding of the nature and advantages of the inventions described here can be gained by reference to the remaining parts of the specification and the accompanying drawings. Brief Description of Drawings [0027] Figure 1 is a block diagram of a communications system that uses multi-stage coding that includes permanent inactivation, along with other features and elements; Figure 2 is a table of variables, sets and the like that are used in several other figures; Figure 3 is a block diagram of a specific embodiment of the encoder shown in Figure 1; Figure 4 is a block diagram of the dynamic encoder of Figure 3 in more detail; Figure 5 is a flowchart illustrating a permanent inactivation (PI) encoding process; Figure 6 is a flowchart illustrating a dynamic encoding process; Figure 7 is a flowchart of a weight calculation operation for a symbol calculation; Figure 8 illustrates a table that can be stored in memory, usable to determine the degree of a symbol based on a lookup value; Figure 9 illustrates a matrix used in an encoding or decoding process; Figure 10 illustrates an equation representing parts of the matrix illustrated in Figure 9, for a specific minimal polynomial; Figure 11 is a flowchart illustrating a process for configuring a set for use in encoding or decoding; Figure 12 illustrates a matrix representation of a set of equations to be solved by a decoder to recover a set, C(), representing source symbols recovered from a set, D(), representing
received encoded symbols, using an SE submatrix representing R static symbols or equations known to the decoder; Figure 13 illustrates a matrix resulting from row/column exchanges of the matrix of Figure 12, using OTF inactivation; Figure 14 is a block diagram describing a process for generating the matrix of Figure 12; Figure 15 illustrates a matrix representation of a set of equations to be solved by a decoder to recover a set, C(), representing the source symbols recovered from a set, D(), representing received encoded symbols, using an SE submatrix and a submatrix corresponding to the permanently inactivated symbols; Figure 16 is a flowchart illustrating a process for generating an LT submatrix as can be used in the matrix of Figure 12 or the matrix of Figure 15; Figure 17 is a flowchart illustrating a process for generating a PI submatrix that can be used in the matrix of Figure 15; Figure 18 is a block diagram of a matrix generator; Figure 19 is a flowchart illustrating a process for generating an SE submatrix; Figure 20 is a flowchart illustrating a process for generating a PI submatrix; Figure 21 is a flowchart illustrating a process for solving for the recovered symbols in a decoder; Figure 22 illustrates a matrix representation of a set of equations to be solved by a decoder to recover a set, C(), representing the source symbols recovered from a set, D(), representing received encoded symbols, after exchanges; Figure 23 illustrates a matrix representation of a set of equations to be solved by a decoder, corresponding to the matrix illustrated in Figure 26; Figure 24 illustrates a matrix representation used as part of a decoding process; Figure 25 illustrates a matrix representation used as another part of a decoding process; Figure 26 illustrates a matrix representation of a set of equations to be solved by a decoder after a partial solution; Figure 27 is a flowchart illustrating another process for solving for the recovered symbols in a decoder; Figure
28 illustrates a matrix representation of a set of equations to be solved by a decoder; Figure 29 illustrates a matrix representation of a set of equations to be solved by a decoder; Figure 30 illustrates an illustrative encoding system that can be implemented as hardware modules, software modules, or portions of program code stored in a program store and executed by a processor, possibly as a collective unit of code not separated as illustrated in the figure; Figure 31 illustrates an illustrative decoding system that can be implemented as hardware modules, software modules, or portions of program code stored in a program store and executed by a processor, possibly as a collective unit of code separated as illustrated in the figure. [0028] Attached is Annex A, which is a code specification for a specific embodiment of an encoder/decoder system, an error correction scheme, and applications for the reliable distribution of data objects, sometimes with details of the present invention in use, and which also includes a specification of how a systematic encoder/decoder can be used in object distribution transport. It should be understood that the specific embodiments described in Annex A are not limiting examples of the invention, and that some aspects of the invention may use the teachings of Annex A while others do not. It should also be understood that limiting statements in Annex A may be limiting with respect to the requirements of the specific embodiments, and such limiting statements may or may not pertain to the claimed inventions; therefore, the claim language need not be limited by such limiting statements. Detailed Description of Specific Embodiments [0029] Details of implementations of parts of the encoders and decoders that are referred to here are provided in Luby I, Luby II, Shokrollahi I, Shokrollahi II, Shokrollahi III, Luby III and Shokrollahi IV, and are not entirely repeated here for the sake of brevity.
All descriptions thereof are hereby incorporated by reference for all purposes, and it should be understood that the implementations presented here are not necessary for the present invention, and many other variations, modifications, or alternatives can also be used, unless indicated to the contrary. [0030] Multi-stage encoding, as described here, encodes the source data in a plurality of stages. Typically, but not always, a first stage adds a predetermined amount of redundancy to the source data. A second stage then uses a chain reaction code, or the like, to produce encoded symbols from the original source data and the redundant symbols computed by the first encoding stage. In one specific embodiment, the received data is first decoded using a chain reaction decoding process. If that process is not successful in recovering the original data completely, a second decoding step can be applied. [0031] Some of the embodiments taught here can be applied to many other types of codes, for example, to the codes described in Internet Engineering Task Force (IETF) Request for Comments (RFC) 5170 (hereinafter "IETF LDPC codes") and to the codes described in U.S. Patent Nos. 6,073,250, 6,081,909 and 6,163,870 (hereinafter "Tornado codes"), resulting in improvements in reliability and/or CPU and/or memory performance for those types of codes. [0032] An advantage of some embodiments taught here is that fewer arithmetic operations are needed to produce encoded symbols, compared to chain reaction encoding alone. Another advantage of some specific embodiments that include a first encoding stage and a second encoding stage is that the first encoding stage and the second encoding stage can be performed at separate times and/or by separate devices, thus dividing the computational load and minimizing the overall computational load as well as the memory size and access pattern requirements.
In multi-stage encoding embodiments, redundant symbols are generated from the input file during the first encoding stage. In these embodiments, in the second encoding stage, encoded symbols are generated from the combination of the input file and the redundant symbols. In some of these embodiments, the encoded symbols can be generated as needed. In embodiments in which the second stage comprises chain reaction encoding, each encoded symbol can be generated without regard to how the other encoded symbols are generated. Once generated, these encoded symbols can then be placed into packets and transmitted to their destination, with each packet containing one or more encoded symbols. Non-packetized transmission techniques can be used instead or in addition. [0033] As used here, the term "file" refers to any data that is stored at one or more sources and is to be distributed as a unit to one or more destinations. Thus, a document, an image, and a file from a file server or computer storage device are all examples of "files" that can be distributed. Files can be of known size (such as a one-megabyte image stored on a hard drive) or of unknown size (such as a file taken from the output of a streaming source). Either way, the file is a sequence of source symbols, where each source symbol has a position in the file and a value. A "file" can also be used to refer to a short portion of a streaming source, that is, the data stream can be divided into one-second intervals, and the block of source data within each one-second interval can be considered a "file". As another example, the blocks of data from a video streaming source can be further divided into multiple parts based on priorities of that data defined, for example, by a video system that can play the video stream, and each part of each block can be considered a "file". As such, the term "file" is used generally and is not intended to be extensively limiting.
[0034] As used here, source symbols represent the data that is to be transmitted or conveyed, and encoded symbols represent data generated based on source symbols that is conveyed over a communications network, or stored, to enable reliable reception and/or regeneration of the source symbols. Intermediate symbols represent symbols that are used or generated during an intermediate step of the encoding or decoding processes, where there is typically a method for generating intermediate symbols from the source symbols and a corresponding inverse method for generating the source symbols from the intermediate symbols. Input symbols represent data that is input to one or more steps during the encoding or decoding process, and output symbols represent data that is output from one or more steps during the encoding or decoding process. [0035] In many embodiments, these different types or labels of symbols can be the same, or at least partially comprise other types of symbols, and in some instances the terms are used interchangeably. In one example, suppose the file to be transmitted is a 1,000-character text file, with each character considered a source symbol. If those 1,000 source symbols are provided as such to an encoder that, in turn, produces encoded symbols that are transmitted, then the source symbols are also input symbols. However, in embodiments where the 1,000 source symbols are, in a first stage, converted into 1,000 (or more or fewer) intermediate symbols and the intermediate symbols are provided to an encoder to generate encoded symbols in a second stage, the source symbols are the input symbols and the intermediate symbols are the output symbols in the first stage, and the intermediate symbols are the input symbols and the encoded symbols are the output symbols in the second stage, while the source symbols are the overall input symbols for this two-stage encoder and the encoded symbols are the overall output symbols for this two-stage encoder.
If, in this example, the encoder is a systematic encoder, then the encoded symbols can comprise the source symbols together with repair symbols generated from the intermediate symbols, whereas the intermediate symbols are distinct from both the source symbols and the encoded symbols. If, instead, in this example, the encoder is a non-systematic encoder, then the intermediate symbols can comprise the source symbols together with redundant symbols generated from the source symbols, using, for example, an LDPC and/or HDPC encoder in the first stage, whereas the encoded symbols are distinct from both the source symbols and the intermediate symbols. [0036] In other examples, there are many symbols and each symbol represents more than one character. In any case, where there is a source-to-intermediate-symbol conversion at a transmitter, the receiver can have a corresponding intermediate-to-source-symbol conversion as its inverse. [0037] Transmission is the process of conveying data from one or more senders to one or more recipients through a channel in order to distribute a file. A sender is also sometimes referred to as an encoder. If a sender is connected to any number of recipients by a perfect channel, the received data can be an exact copy of the source file, since all of the data will be received correctly. Here, however, the channel is assumed not to be perfect, which is the case for most real-world channels. Of the many channel imperfections, two imperfections of interest are data erasure and incomplete data (which can be treated as a special case of data erasure). Data erasure occurs when the channel loses data. Incomplete data occurs when a recipient does not start receiving data until some of the data has already passed by, the recipient stops receiving data before the end of the transmission, the recipient chooses to receive only a portion of the transmitted data, and/or the recipient intermittently stops and starts receiving data again.
As an example of incomplete data, a mobile satellite sender may be transmitting data representing a source file and begin transmission before a recipient is in range. Once the recipient is within range, data can be received until the satellite moves out of range, at which point the recipient can redirect its antenna (during which time it is not receiving data) to start receiving data for the same input file being transmitted by another satellite that has moved into range. As should be apparent from reading this description, incomplete data is a special case of data erasure, since the recipient can treat the incomplete data (and the recipient has the same problems) as if the recipient had been in range the entire time but the channel had lost all data up to the point where the recipient started receiving data. In addition, as is well known in the design of communication systems, detectable errors can be considered equivalent to erasures simply by discarding all data blocks or symbols that have detectable errors. [0038] In some communication systems, a recipient receives data generated by multiple senders, or by a sender using multiple connections. For example, to speed up a download, a recipient can simultaneously connect to more than one sender transmitting data for the same file. As another example, in a multicast transmission, multiple multicast data streams can be transmitted to allow recipients to connect to one or more of these streams to match the aggregate transmission rate to the bandwidth of the channel connecting them to the sender. In all such cases, a concern is to ensure that all transmitted data is of independent use to a recipient, that is, that the data from the multiple sources is not redundant among the streams, even when the transmission rates are widely different for the different streams, and when there are arbitrary loss patterns.
[0039] In general, a communication channel is that which connects the sender and the recipient for data transmission. The communication channel can be a real-time channel, where the channel moves data from the sender to the recipient as the channel obtains the data, or the communication channel can be a storage channel that stores some or all of the data in its transit from the sender to the recipient. An example of the latter is disk storage or another storage device. In this example, a program or device that generates data can be thought of as the sender, transmitting the data to a storage device. The recipient is the program or device that reads the data from the storage device. The mechanisms that the sender uses to get the data onto the storage device, the storage device itself, and the mechanisms that the recipient uses to obtain the data from the storage device collectively form the channel. If there is a chance that these mechanisms or the storage device will lose data, then this is treated as data erasure in the communication channel. [0040] When the sender and the recipient are separated by a communication channel in which symbols can be erased, it is preferable not to transmit an exact copy of an input file, but instead to transmit data generated from the input file that assists with recovery from erasures. An encoder is a circuit, device, module, or code segment that handles this task. One way of viewing the operation of the encoder is that the encoder generates encoded symbols from source symbols, where a sequence of source symbol values represents the input file. Each source symbol would thus have a position in the input file and a value. A decoder is a circuit, device, module, or code segment that reconstructs the source symbols from the encoded symbols received by the recipient. In multi-stage coding, the encoder and the decoder are sometimes further divided into sub-modules, each performing a different task.
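The role of repair data in recovering from erasures can be illustrated with a minimal sketch. This toy uses a single XOR parity symbol, which is far simpler than the multi-stage codes described here, but it shows the basic principle of transmitting data generated from the input file to assist with recovery:

```python
# Toy erasure-recovery sketch: one XOR parity repair symbol can
# recover any single erased source symbol. Illustrative only; the
# codes described in this document handle many erasures.
source = [0x41, 0x42, 0x43, 0x44]      # four one-byte source symbols

repair = 0
for s in source:
    repair ^= s                         # repair symbol = XOR of all sources

# Suppose the channel erases the source symbol at position 2, but
# the repair symbol arrives intact.
received = {0: source[0], 1: source[1], 3: source[3]}

recovered = repair
for value in received.values():
    recovered ^= value                  # XOR out the symbols that did arrive

assert recovered == source[2]           # the erased symbol is recovered
```

Because XOR is its own inverse, the one missing symbol drops out of the parity once every received symbol is XORed away.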
[0041] In embodiments of multi-stage coding systems, the encoder and the decoder can be further divided into sub-modules, each performing a different task. For example, in some embodiments, the encoder comprises what is referred to here as a static encoder and a dynamic encoder. As used here, a "static encoder" is an encoder that generates a number of redundant symbols from a set of source symbols, where the number of redundant symbols is determined before encoding. When static encoding is used in a multi-stage coding system, the combination of the source symbols and the redundant symbols generated from the source symbols using a static encoder is often referred to as the intermediate symbols. Examples of possible static encoding codes include Reed-Solomon codes, Tornado codes, Hamming codes, LDPC codes such as LDPC IETF codes, etc. The term "static decoder" is used here to refer to a decoder that can decode data that was encoded by a static encoder. [0042] As used here, a "dynamic encoder" is an encoder that generates encoded symbols from a set of input symbols, where the number of possible encoded symbols is independent of the number of input symbols, and where the number of encoded symbols to be generated need not be fixed. Often in a multi-stage code, the input symbols are the intermediate symbols generated using a static code and the encoded symbols are generated from the intermediate symbols using a dynamic encoder. One example of a dynamic encoder is a chain reaction encoder, such as the encoders taught in Luby I and Luby II. The term "dynamic decoder" is used here to refer to a decoder that can decode data that was encoded by a dynamic encoder.
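The static/dynamic distinction above can be sketched schematically. This is not the actual encoder of the cited references; the parity patterns and neighbor selection below are illustrative assumptions, chosen only to show that the static stage emits a fixed number of redundant symbols while the dynamic stage can emit one encoded symbol per key without a predetermined limit:

```python
import random

def static_encode(source, r):
    """Static encoder sketch: a fixed number r of redundant symbols,
    determined before encoding (simple XOR parities over an arbitrary
    fixed pattern; illustrative only)."""
    redundant = []
    for i in range(r):
        p = 0
        for j, s in enumerate(source):
            if (j + i) % 2 == 0:        # arbitrary fixed selection pattern
                p ^= s
        redundant.append(p)
    return source + redundant           # the intermediate symbols

def dynamic_encode(intermediate, esi):
    """Dynamic (chain-reaction style) encoder sketch: any number of
    encoded symbols can be generated, one per key/ESI, each computed
    independently of the others from its key."""
    rng = random.Random(esi)            # neighbors chosen pseudo-randomly from the ESI
    count = rng.randint(1, len(intermediate))
    out = 0
    for s in rng.sample(intermediate, count):
        out ^= s
    return out

intermediate = static_encode([1, 2, 3, 4], r=2)   # 4 source + 2 redundant
encoded = [dynamic_encode(intermediate, esi) for esi in range(10)]  # unbounded
```

Note that the number of dynamic outputs (ten here) exceeds the number of inputs and could be made arbitrarily large, which is the defining property of a dynamic encoder.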
[0043] In some embodiments, the encoding, which is a multi-stage and systematic code, uses a decoding process applied to the source symbols to obtain intermediate symbol values based on the relationships defined by the static encoder among the intermediate symbols and defined by the dynamic encoder between the intermediate symbols and the source symbols, and then a dynamic encoder is used to generate additional encoded symbols, or repair symbols, from the intermediate symbols. Similarly, a corresponding decoder has a decoding process to receive encoded symbols and decode from them the intermediate symbol values based on the relationships defined by the static encoder among the intermediate symbols and defined by the dynamic encoder between the intermediate symbols and the received encoded symbols, and then a dynamic encoder is used to generate any missing source symbols from the intermediate symbols. [0044] Embodiments of multi-stage encoding need not be limited to any particular type of symbol. Typically, the values for the symbols are selected from an alphabet of 2^M symbols for some positive integer M. In such cases, a source symbol can be represented by a sequence of M bits of data from the input file. The value of M is often determined based, for example, on the use of the application, on the communication channel, and/or on the size of the encoded symbols. In addition, the size of an encoded symbol is often determined based on the application, the channel, and/or the size of the source symbols. In some cases, the encoding process can be simplified if the encoded symbol values and the source symbol values are the same size (that is, represented by the same number of bits or selected from the same alphabet). If this is the case, then the size of the source symbol value is limited when the size of the encoded symbol value is limited. For example, it may be desirable to place the encoded symbols in packets of limited size.
If some data about a key associated with the encoded symbols is transmitted in order to recover the key at the receiver, the encoded symbol is preferably small enough to accommodate, in one packet, the encoded symbol value and the data about the key. [0045] As an example, if an input file is a multi-megabyte file, the input file can be divided into thousands, tens of thousands, or hundreds of thousands of source symbols, with each source symbol encoding thousands, hundreds, or just a few bytes. As another example, for a packet-based Internet channel, a packet with a payload size of 1024 bytes may be adequate (one byte is 8 bits). In this example, assuming that each packet contains one encoded symbol and 8 bytes of auxiliary information, an encoded symbol size of 8128 bits ((1024-8)*8) would be adequate. In this way, the source symbol size can be chosen as M = (1024-8)*8, or 8128 bits. As another example, some satellite systems use the MPEG packet standard, where the payload of each packet comprises 188 bytes. In this example, assuming that each packet contains one encoded symbol and 4 bytes of auxiliary information, an encoded symbol size of 1472 bits ((188-4)*8) would be adequate. Thus, the source symbol size can be chosen as M = (188-4)*8, or 1472 bits. In a general-purpose communication system using multi-stage encoding, application-specific parameters, such as the source symbol size (that is, M, the number of bits encoded by a source symbol), can be variables configured by the application. [0046] Each encoded symbol has a value. In a preferred embodiment, which is considered below, each encoded symbol also has an identifier, called its "key", associated with it. Preferably, the key of each encoded symbol can be easily determined by the recipient, to allow the recipient to distinguish one encoded symbol from other encoded symbols. Preferably, the key of an encoded symbol is distinct from the keys of all other encoded symbols.
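The packet-size arithmetic from the examples above can be checked directly; the helper name below is illustrative, not part of any standard API:

```python
# Symbol size in bits for a packet carrying one encoded symbol plus
# some bytes of auxiliary information (e.g., key data).
def symbol_bits(payload_bytes, aux_bytes):
    return (payload_bytes - aux_bytes) * 8

assert symbol_bits(1024, 8) == 8128   # packet-based Internet channel example
assert symbol_bits(188, 4) == 1472    # MPEG packet example
```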
Various forms of keying are discussed in the prior art. For example, Luby I describes various forms of keying that can be employed in embodiments of the present invention. In other preferred embodiments, such as the one described in Appendix A, the key of an encoded symbol is referred to as an "Encoded Symbol Identifier" or "Encoding Symbol Identifier", or more simply an "ESI". [0047] Multi-stage encoding is particularly useful where there is an expectation of data erasure or where the recipient does not begin and end reception exactly when a transmission begins and ends. The latter condition is referred to here as "incomplete data". With respect to erasure events, multi-stage coding shares many of the benefits of the chain reaction coding taught in Luby I. In particular, multi-stage encoded symbols are information additive, so that any suitable number of packets can be used to recover an input file to a desired degree of accuracy. These conditions do not adversely affect the communication process when multi-stage encoding is used, since the encoded symbols generated with multi-stage encoding are information additive. For example, if a hundred packets are lost due to a burst of noise causing data erasure, an additional hundred packets can be collected after the burst to replace the loss of the dropped packets. If a thousand packets are lost because a receiver did not tune in to a transmitter when the transmission started, the receiver can simply collect a thousand other packets from any other period of the transmission, or even from another transmitter. With multi-stage encoding, a receiver is not restricted to collecting any particular set of packets, so it can receive some packets from one transmitter, switch to another transmitter, lose some packets, miss the beginning or end of a given transmission, and still recover an input file.
The ability to join and leave a transmission without receiver-transmitter coordination helps to simplify the communication process. [0048] In some embodiments, transmitting a file using multi-stage encoding can include generating, forming, or extracting source symbols from an input file, computing redundant symbols, encoding the source and redundant symbols into one or more encoded symbols, where each encoded symbol is generated based on its key independently of all other encoded symbols, and transmitting the encoded symbols to one or more recipients through a channel. Additionally, in some embodiments, receiving (and reconstructing) a copy of the input file using multi-stage encoding can include receiving some set or subset of encoded symbols from one or more data streams, and decoding the source symbols from the values and keys of the received encoded symbols. Systematic Codes and Non-Systematic Codes [0049] A systematic code is a code in which the source symbols are among the encoded symbols that can be transmitted. In this case, the encoded symbols are made up of the source symbols and redundant symbols, also called repair symbols, generated from the source symbols. A systematic code is preferable to a non-systematic code for many applications, for a variety of reasons. For example, in a file delivery application, it is useful to be able to start transmitting data in sequential order while the data is being used to generate repair data, where the repair data generation process can take some time. As another example, many applications prefer to send the original source data in sequential order in its unmodified form on one channel, and to send the repair data on another channel. A typical reason for this is to support legacy receivers that do not incorporate FEC decoding while, at the same time, providing a better experience to enhanced receivers that do incorporate FEC decoding, where the legacy receivers join only the source data channel and the enhanced receivers join both the source data channel and the repair data channel. [0050] In these types of applications, it can sometimes be the case that the loss pattern and fraction of loss among the source symbols received by a receiver is quite different from that experienced among the received repair symbols. For example, when the source symbols are sent before the repair symbols, due to bursty channel loss conditions, the fraction and pattern of loss among the source symbols can be quite different from the corresponding fraction and pattern of loss among the repair symbols, and the pattern of loss among the source symbols can be very different from what would be typical if the loss were uniformly random. As another example, when the source data is sent on one channel and the repair data on another channel, there can be very different loss conditions on the two channels. Thus, it is desirable to have a systematic FEC code that works well under different types of loss conditions. [0051] Although the examples presented here refer to systematic codes (where the output or encoded symbols include the source or input symbols) or to non-systematic codes, the teachings presented here should be considered applicable to both, unless otherwise indicated. Shokrollahi III teaches methods for converting a non-systematic chain reaction code into a systematic code in such a way that the robustness properties of the non-systematic code are maintained by the systematic code so constructed. [0052] In particular, using the methods taught in Shokrollahi III, the systematic code constructed has the property that there is little differentiation in terms of recoverability by the decoder between lost source symbols and lost repair symbols, that is, the probability of decoding recovery is essentially the same for a given amount of total loss, almost independently of the proportion of loss among the source symbols compared to the proportion of loss among the repair symbols.
In addition, the loss pattern among the encoded symbols does not significantly affect the probability of decoding recovery. In comparison, for other systematic code constructions, such as those described for Tornado codes or for LDPC IETF codes, there is in many cases a strong differentiation in terms of recoverability by the decoder between lost source symbols and lost repair symbols, that is, the probability of decoding recovery can vary widely for a given amount of total loss, depending on the proportion of loss among the source symbols compared to the proportion of loss among the repair symbols. In addition, the loss pattern among the encoded symbols can have a strong effect on the probability of decoding recovery. Tornado codes and LDPC IETF codes have reasonably good recovery properties if the losses of encoded symbols are uniformly random among all the encoded symbols, but the recovery properties deteriorate as the loss model deviates from uniformly random loss. Thus, in this sense, the embodiments taught in Shokrollahi III have advantages over other systematic code constructions. [0053] For an FEC code with the property that there is a strong effect in terms of recoverability by the decoder depending on the proportions of lost source symbols and lost repair symbols, and depending on the loss patterns, one approach to overcoming this property, when applicable, is to send the encoded symbols in a uniformly random order, that is, the combination of source and repair symbols is sent in uniformly random order, and in this way the source symbols are randomly interleaved among the repair symbols. Sending the encoded symbols in random order has the advantage that, whatever the channel loss model, whether the losses are bursty or uniformly random or of some other type, the losses of the encoded symbols are still random.
However, as noted above, this approach is not desirable for some applications, for example, for applications where it is desirable to send the source symbols in sequence before the repair symbols, or where the source symbols are sent on a different channel than the repair symbols. [0054] In such cases, systematic code constructions where the loss pattern among the encoded symbols does not affect the decoder's recovery properties are desirable, and some examples are provided here. [0055] As used here, "random" and "pseudo-random" are often equivalent and/or interchangeable and may depend on context. For example, random losses can refer to symbols that are lost by a channel, which can truly be a random event, whereas a random selection of symbol neighbors may actually be a pseudo-random selection that can be repeated according to a deterministic process, but that has the same or similar properties or behaviors as would be the case with a truly random selection. Unless stated otherwise, explicitly or by context, characterizing something as random does not exclude pseudo-randomness. [0056] In one approach to such a systematic FEC encoder, source symbols are obtained by an encoder that includes multiple encoder sub-blocks or sub-processes, one of which operates as a decoder to generate intermediate symbols that are the input symbols for another sub-block or sub-process. The intermediate symbols are then applied to another sub-block or sub-process that encodes the intermediate symbols into encoded symbols, such that the encoded symbols include the source symbols (along with additional redundant symbols) generated by a consistent process, thereby providing robustness and other benefits compared to an encoder that is a systematic encoder that uses one process (for example, copying) to obtain the source symbols of the encoded symbol set and another process to obtain the redundant symbols of the encoded symbol set.
[0057] The output encoding can be a chain reaction encoder, a static encoder, or other variations. Appendix A describes a systematic code embodiment. After reading this description, those skilled in the art should be able to easily extend the teachings of Shokrollahi III to apply to systematic codes such as Tornado codes and LDPC IETF codes, to arrive at new versions of these codes that are also systematic codes but have better recovery properties. In particular, the new versions of these codes, obtained by applying the general method described below, are improved to have the properties that the proportion of loss among the source symbols compared to the proportion of loss among the repair symbols does not significantly affect the probability of decoding recovery, and, in addition, the loss pattern does not significantly affect the probability of decoding recovery. Thus, these codes can be effectively used in the applications described above that require the use of systematic FEC codes with recovery properties that are not strongly affected by different amounts of fractional loss among the source symbols and the repair symbols or by different loss patterns. [0058] The new encoding method can be applied generally to the encoding of systematic FEC codes, non-systematic FEC codes, fixed-rate FEC codes, and chain reaction FEC codes, to arrive at a general encoding method for new improved systematic FEC codes. There is also a corresponding new decoding method that can be applied. Decoder Example in Encoder [0059] An example of a decoder within an encoder will now be provided. [0060] Let encoding method E be an encoding method used by an encoder (at a transmitter or elsewhere) for a fixed-rate (non-systematic or systematic) FEC code E that generates N encoded symbols from K source symbols, where N is at least K. Similarly, let decoding method E be the corresponding decoding method for FEC code E, used by a decoder at a receiver or elsewhere.
[0061] Assume that FEC code E has the property that a random set of K of the N encoded symbols is sufficient to recover the original K source symbols with reasonable probability using decoding method E, where the reasonable probability can, for example, be a probability of 1/2. The reasonable probability may be some requirement determined by the use or application and may be a different value. It should be understood that the construction of a particular code need not be specific to a particular recovery probability, and that applications and systems can be designed for their particular level of robustness. In some cases, the probability of recovery can be increased by considering more than K symbols, and then determining, using a decoding process, a set of K symbols from among those considered symbols that allows successful decoding. [0062] Assume that for FEC code E, an ESI is associated with each encoded symbol and that the ESI identifies that encoded symbol. Without loss of generality, the ESIs are labeled here 0, 1, 2, ..., N-1.
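The "reasonable probability" property of [0061] can be estimated empirically. A minimal sketch, assuming a toy FEC code E given by a random dense binary generator matrix (an illustrative assumption; the codes E contemplated here are structured chain reaction and multi-stage codes, and the parameters below are arbitrary): a random set of K ESIs is decodable exactly when the corresponding K rows are invertible over GF(2).

```python
import random

def invertible_gf2(rows, K):
    """True if the K x K binary matrix 'rows' is invertible over GF(2)."""
    m = [list(r) for r in rows]
    for col in range(K):
        piv = next((r for r in range(col, K) if m[r][col]), None)
        if piv is None:
            return False
        m[col], m[piv] = m[piv], m[col]
        for r in range(K):
            if r != col and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[col])]
    return True

K, N, trials = 8, 16, 2000
rng = random.Random(2)
# Toy code E: row i of G defines the encoded symbol with ESI i.
G = [[rng.randint(0, 1) for _ in range(K)] for _ in range(N)]

hits = 0
for _ in range(trials):
    subset = rng.sample(range(N), K)        # a random set of K of the N ESIs
    if invertible_gf2([G[i] for i in subset], K):
        hits += 1
p = hits / trials   # estimated decoding probability (~0.29 for dense random GF(2) matrices)
```

For dense random matrices this estimate settles near the classical product (1 - 2^-1)(1 - 2^-2)...; real codes are designed so the corresponding probability is much closer to 1.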
[0063] In one embodiment of a systematic encoding method F for a systematic FEC code F generated using the methods for FEC code E, K and N are input parameters. The source symbols for FEC code F will have ESIs 0, ..., K-1 and the repair symbols for FEC code F will have ESIs K, ..., N-1.
The systematic encoding method F for FEC code F generates N encoded symbols from K source symbols C(0),...,C(K-1), using encoding method E and decoding method E for FEC code E, performed by hardware and/or software as follows: (1) randomly permute the N ESIs associated with FEC code E to arrive at the permuted FEC code E ESI set X(0),...,X(N-1), where this permuted ESI set is organized in such a way that the K source symbols of FEC code E can be decoded from the first K encoded symbols of FEC code E with respect to the permuted ESI order X(0),...,X(K-1); (2) for each i=0,...,N-1, associate ESI i of FEC code F with ESI X(i) of FEC code E; (3) for each i=0,...,K-1, set the value of the FEC code E encoded symbol with ESI X(i) to the source symbol value C(i); (4) apply decoding method E to the symbols C(0),...,C(K-1) with corresponding FEC code E ESIs X(0),...,X(K-1) to generate the decoded symbols E(0),...,E(K-1); (5) apply encoding method E to the decoded symbols E(0),...,E(K-1) to generate the FEC code E encoded symbols D(0),...,D(N-1) with associated FEC code E ESIs 0,...,N-1; (6) the encoded symbols for encoding method F with ESIs 0, 1, ..., N-1 are D(X(0)), D(X(1)),...,D(X(N-1)). [0064] Note that the output of encoding method F is N encoded symbols, of which the first K are the source symbols C(0),...,C(K-1) with associated ESIs 0, 1,...,K-1. In this way, encoding method F produces a systematic encoding of the source data.
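The six steps above can be sketched end-to-end. A minimal, hedged sketch: FEC code E is stood in for by a toy random dense linear code over GF(2) with integer symbols combined by XOR (the actual codes E contemplated here are chain reaction and multi-stage codes, and the solver, decodability check, and parameters below are illustrative assumptions):

```python
import random

K, N = 4, 8
rng = random.Random(7)

# Toy stand-in for FEC code E: row i of G says which of the K input
# symbols are XORed to form the encoded symbol with ESI i.
G = [[rng.randint(0, 1) for _ in range(K)] for _ in range(N)]

def encode_E(symbols, esi):
    """Encoding method E for one ESI: XOR of the selected symbols."""
    out = 0
    for j in range(K):
        if G[esi][j]:
            out ^= symbols[j]
    return out

def decode_E(values, esis):
    """Decoding method E: solve for the K input symbols from K encoded
    symbols with the given ESIs (Gaussian elimination over GF(2));
    returns None if that ESI set is not decodable."""
    rows = [(list(G[e]), v) for e, v in zip(esis, values)]
    for col in range(K):
        piv = next((r for r in range(col, K) if rows[r][0][col]), None)
        if piv is None:
            return None
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(K):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rows[i][1] for i in range(K)]

def encoding_method_F(source):
    # Step (1): permute the N ESIs so that the first K are decodable.
    while True:
        X = rng.sample(range(N), N)
        if decode_E([0] * K, X[:K]) is not None:   # decodability check only
            break
    # Steps (2)-(4): treat source symbol C(i) as the code E encoded
    # symbol with ESI X(i), and decode the intermediate symbols E(0..K-1).
    inter = decode_E(source, X[:K])
    # Step (5): re-encode the intermediate symbols for all N ESIs of E.
    D = [encode_E(inter, i) for i in range(N)]
    # Step (6): the code F encoded symbol with ESI i is D(X(i)).
    return [D[X[i]] for i in range(N)]

source = [0x11, 0x22, 0x33, 0x44]
encoded = encoding_method_F(source)
assert encoded[:K] == source   # systematic: first K outputs are the source
```

By construction, the intermediate symbols solve the same equations that step (5) re-evaluates, so the first K outputs reproduce the source symbols exactly, while the repair symbols encoded[K:] are generated by the same consistent process, which is the robustness point of [0056].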
[0065] One embodiment of a decoding method F that corresponds to the encoding method F just described is as follows, where K and N are input parameters for the method that are used throughout. This decoding method F recovers the K source symbols C(0),...,C(K-1) from K received encoded symbols D(0),...,D(K-1) with associated FEC code F ESIs Y(0),...,Y(K-1). The received symbols need not be exactly the same as the symbols sent.
The method, performed by hardware and/or software, is as follows: (1) randomly permute the N ESIs associated with FEC code E to arrive at the permuted FEC code E ESI set X(0),...,X(N-1), where this permuted ESI set is organized in such a way that the K source symbols of FEC code E can be decoded from the first K encoded symbols of FEC code E with respect to the permuted ESI order X(0),...,X(K-1); (2) apply decoding method E to the received encoded symbols D(0),...,D(K-1) with associated FEC code E ESIs X(Y(0)),...,X(Y(K-1)) to generate the decoded symbols E(0),...,E(K-1); (3) using encoding method E, generate the encoded symbols C(0),...,C(K-1) with FEC code E ESIs X(0),...,X(K-1) from E(0),...,E(K-1); (4) the decoded source symbols of FEC code F with ESIs 0,...,K-1 are C(0),...,C(K-1). [0068] The methods and equipment that operate as just described have some desirable properties.
For example, consider an FEC code E that is a systematic code and has the property that a random set of K received encoded symbols can be decoded with high probability, but that also has the property that when K encoded symbols are received and the proportion of source symbols among the received encoded symbols is not close to K/N, then decoding with high probability is not possible. In this case, the embodiment describes a new FEC code F that uses the encoding and decoding methods of FEC code E, and the new FEC code F has the desirable property of decoding with high probability from a set of K received encoded symbols, regardless of the proportion of the received encoded symbols that are source symbols. [0069] There are many variants of the above embodiment. For example, in step (1) of encoding method F, the random permutation of ESIs can be pseudo-random, or based on some other method that produces a good selection of ESIs but is neither random nor pseudo-random. In case FEC code E is a systematic code, it is preferable that the fraction of the first K ESIs of the permutation selected in step (1) that are among the systematic ESIs be proportional to the rate of FEC code E, that is, proportional to K/N. It is preferable that the random choices of ESIs made by the new encoding method F in step (1) can be represented by a succinct amount of data, for example, by a seed for a well-known or agreed-upon pseudo-random generator, together with an agreed-upon method for choosing the ESIs based on the seed and on how the pseudo-random generator works, so that the new decoding method F can perform exactly the same choice of ESI permutation in step (1) based on the same seed, pseudo-random generator, and methods for generating the ESIs.
In general, it is preferable that the process used by the new F encoding method in step (1) to generate the ESI sequence and the process used by the new F decoding method in step (1) to generate the ESI sequence both generate the same sequence of ESIs, to ensure that the new F decoding method is the inverse of the new F encoding method. [0070] There are other variations as well, where, for example, explicit ESIs are not used, but instead an encoded symbol is uniquely identified by its position relative to other encoded symbols, or by other means. [0071] In the description above, the original ESIs of FEC code E are remapped by FEC code F so that the ordered set of source symbols is assigned ESIs 0, ..., K-1 in consecutive order, and the repair symbols are assigned ESIs K, ..., N-1. Other variations are possible; for example, the remapping of ESIs can occur at a sender after the F encoding method has generated the encoded symbols but before the encoded symbols are transmitted, and the inverse remapping of the ESIs can occur at a receiver as the encoded symbols are received but before they are processed by the F decoding method to recover the original source symbols. [0072] As another variation, in step (1) of the new F encoding method, the permutation can be selected by first choosing K+A ESIs of FEC code E, where A is a value chosen to guarantee decodability with high probability, then determining during a simulation of the decoding process which K of the K+A ESIs are actually used during decoding, and letting the selected permutation place those K ESIs actually used during decoding, out of the initial set of K+A ESIs, as the first K ESIs of the permutation. Similar variations apply to the new F decoding method.
[0073] As another variation of the F encoding method, a seed used to generate the random permutation is precomputed for a value of K to ensure that the first K encoded symbols of FEC code E associated with the ESI permutation produced in step (1) are decodable, and this seed is then always used for that K in step (1) of the F encoding method and of the corresponding F decoding method to generate the permutation. Methods for choosing such a seed include randomly selecting seeds until one is found that guarantees decodability in step (1), and then selecting that seed. Alternatively, a seed with these properties can be generated dynamically by the F encoding method and then communicated to the F decoding method. [0074] As another variation of the F encoding method, a partial permutation can be selected in step (1); that is, not all ESIs need be generated in step (1) of the new F encoding method, and not all encoded symbols need be generated if they are not required in steps (5) and (6), for example because they correspond to source symbols that are already among the encoded symbols, or because fewer than N encoded symbols need to be generated. In other variations, not all of the encoded symbols in steps (3) and (4) of the new F decoding method need to be recomputed, since some of the received encoded symbols may correspond to some of the source symbols being recovered. Similarly, in step (2) of the new F decoding method, not all K symbols E(0), ..., E(K-1) need to be decoded, for example if some of the symbols decoded in step (2) are not needed in subsequent steps to generate encoded symbols. [0075] The methods and embodiments described above have many applications. For example, the F encoding method and the F decoding method and their variations can be applied to Tornado codes and to IETF LDPC codes to provide improved reception overhead and decoding failure probability performance.
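The seed-search method of paragraph [0073] can be sketched as follows. The `decodes` predicate stands in for the decodability check (for example, a simulated decoding run over the first K permuted ESIs) and is an assumption of this sketch, not part of the specification.

```python
import random

def find_seed(K, N, decodes, max_tries=10000):
    """Randomly select seeds until one yields a permutation whose first K
    ESIs are decodable, then return that seed (paragraph [0073])."""
    for _ in range(max_tries):
        seed = random.randrange(2 ** 32)
        perm = list(range(N))
        random.Random(seed).shuffle(perm)
        if decodes(perm[:K]):       # caller-supplied decodability check
            return seed
    raise RuntimeError("no suitable seed found")
```

The returned seed can then be hard-wired for that K, or generated dynamically at the encoder and communicated to the decoder, as the paragraph describes.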
In general, these new methods apply to any fixed-rate FEC code. Variations of these new methods can also be applied to FEC codes that do not have a fixed rate, that is, to FEC codes such as chain reaction codes, for which the number of encoded symbols that can be generated is independent of the number of source symbols. [0076] Shokrollahi III contains similar teachings for creating systematic encoding and decoding methods for chain reaction codes. In some embodiments, the E encoding and decoding methods used for these codes are those taught in Luby I, Luby II, Shokrollahi I, Shokrollahi II, Luby III and Shokrollahi IV. To describe systematic encoders, it is often sufficient to describe the E encoding method and the E decoding method and to use the general principles described above and known from these references to transform those methods into systematic F encoding methods and systematic F decoding methods. It should therefore be apparent to one skilled in the art, upon reading this description and the cited references, how to use these teachings to describe E encoding methods and E decoding methods and apply them to systematic F encoding methods and systematic F decoding methods, or the like.
Inactivation
[0077] Inactivation decoding, as taught in Shokrollahi II, is a general method that can be applied in combination with belief propagation whenever solving for a set of unknown variables from a set of known linear equation values, and it is particularly beneficial when implementing efficient encoding and decoding methods that are based on sets of linear equations.
In order to distinguish between inactivation decoding as described in Shokrollahi II and permanent inactivation decoding as described herein below, "on-the-fly" inactivation (abbreviated "OTF inactivation" in some places) is used to refer to the methods and teachings of Shokrollahi II, while "permanent inactivation" is used to refer to the methods and teachings presented here, in which inactivations are selected in advance. [0078] A tenet of belief propagation decoding is that, whenever possible during the decoding process, the decoder should use a (possibly reduced) equation that depends on a single remaining unknown variable to solve for that variable, associate that equation with that variable, and then reduce the remaining unused equations by eliminating their dependency on the solved variable. Such simple belief propagation based decoding processes were used, for example, in some embodiments of Tornado codes, of chain reaction codes as described in Luby I, Luby II, Shokrollahi I, Shokrollahi II, Luby III and Shokrollahi IV, and of IETF LDPC codes. [0079] OTF inactivation decoding proceeds in multiple phases. In a first phase of an OTF inactivation decoding method, whenever the belief propagation decoding process cannot continue because no remaining equation depends on just one remaining unknown variable, the decoder will "OTF inactivate" one or more unknown variables and consider them "solved" with respect to the belief propagation process and eliminated from the remaining equations (even though they are not actually solved yet), possibly thereby allowing the belief propagation decoding process to continue.
The variables that are OTF inactivated during the first phase are then solved, for example using Gaussian elimination or more computationally efficient methods, in a second phase, and then, in a third phase, the values of these OTF inactivated variables are used to fully solve the variables associated with equations during the first decoding phase. [0080] OTF inactivation decoding, as taught in more detail in Shokrollahi II, can be applied to many other types of codes beyond chain reaction codes. For example, it can be applied to the general class of LDPC and LDGM codes, in particular to IETF LDPC codes and Tornado codes, resulting in improvements in reliability (decreasing the probability of decoding failure) and/or CPU and/or memory performance (increasing the encoding and/or decoding speed and/or reducing the required memory size and/or access pattern requirements) for those types of codes. [0081] Some variations of chain reaction code embodiments in combination with OTF inactivation decoding are described in Shokrollahi IV. Other variations are described in the present application.
System Overview
[0082] Figure 1 is a block diagram of a communication system 100 that uses multi-stage coding. It is similar to that illustrated in Shokrollahi I, but in this case encoder 115 takes into account a designation of which intermediate symbols are "permanently inactivated" and operates differently on those intermediate symbols than on the intermediate symbols that are not permanently inactivated during the dynamic encoding process. Likewise, decoder 155 also takes the permanently inactivated intermediate symbols into account during decoding. [0083] As shown in figure 1, K source symbols (C(0), ..., C(K-1)) are input to encoder 115 and, if decoding is successful with the symbols that become available to decoder 155, then decoder 155 can output a copy of those K source symbols.
In some embodiments, a stream is parsed into blocks of K symbols, and in some embodiments, a file comprising some number of source symbols larger than K is divided into symbol blocks of size K and transmitted. In some embodiments, where a block size K' > K is preferred, K'-K padding symbols can be added to the K source symbols. These padding symbols can have values equal to 0, or any other fixed value that is known to both encoder 115 and decoder 155 (or is otherwise capable of being determined at decoder 155). It should be understood that encoder 115 may comprise multiple encoders, modules or the like, and this may also be the case for decoder 155. [0084] As illustrated, encoder 115 also receives a sequence of dynamic keys from a dynamic key generator 120 and a sequence of static keys from a static key generator 130, each of which can be driven by a random number generator 135. The output of the dynamic key generator 120 may simply be a sequence of cardinal numbers, but this need not be the case. The operation of the key generators can be as illustrated in Shokrollahi I. [0085] It should be understood that the various functional units illustrated in the figures can be implemented as hardware with specific inputs provided as input signals, or can be implemented by a processor executing instructions that are stored in an instruction memory and executed in the proper order to perform the corresponding function. In some cases, the code and the processor are not explicitly illustrated, but one of ordinary skill in the art will know how to implement such details upon reading this description. [0086] Encoder 115 also receives inputs from an inactivation designator 125 and other parameters input to system 100 along the lines described elsewhere herein.
The outputs of the inactivation designator 125 may include a value P, representing the number of intermediate symbols that are designated as "permanently inactivated" for decoding purposes (the "PI list" indicates which P of the intermediate symbols are on the list). As explained elsewhere, the intermediate symbols used in the encoding processes are just the K source symbols in some embodiments, but in other embodiments there is some kind of processing, conversion, encoding, decoding, etc. that generates the intermediate symbols from the K source symbols beyond merely copying them. [0087] Input parameters can include random seeds used by the key generators and/or the encoder's encoding processes (described in more detail below), the number of encoded symbols to generate, the number of LDPC symbols to generate, the number of HDPC symbols to generate, the number of intermediate symbols to generate, the number of redundant symbols to generate, etc., and/or some of these values can be calculated from other values available to encoder 115. For example, the number of LDPC symbols to be generated can be calculated entirely from a fixed formula and the value K. [0088] Encoder 115 generates, from its inputs, a sequence of encoded symbols (B(I0), B(I1), B(I2), ...) and supplies them to a transmission module 140 that also receives the dynamic key values (I0, I1, I2, ...) from dynamic key generator 120, although this may not be necessary if there is another method of conveying that information. Transmission module 140 conveys what it is given over a channel 145, possibly in a conventional manner that need not be described here in more detail. A receiving module 150 receives the encoded symbols and the dynamic key values (where needed). Channel 145 can be a channel through space (for transmission from one place to be received at another place) or a channel through time (for recording to media, for example, for later playback).
Channel 145 can cause the loss of some of the encoded symbols. Thus, the encoded symbols B(Ia), B(Ib), ... that decoder 155 receives from receiving module 150 may not be the same as all of the encoded symbols that the transmission module sent. This is indicated by the different subscript indices. [0089] Decoder 155 is preferably capable of regenerating the keys used for the received symbols (keys that may differ) using dynamic key regenerator 160, random number generator 163 and static key generator 165, and of receiving various decoding parameters as inputs. Some of these inputs can be hardcoded (that is, set during the construction of a device) and some can be changeable inputs. [0090] Figure 2 is a table of variables, sets and the like, with a summary of the notation that is most often used in the other figures and throughout this description. Unless otherwise stated, K denotes the number of source symbols for the encoder, R denotes the number of redundant symbols generated by a static encoder, and L is the number of "intermediate symbols", that is, the combination of source and redundant symbols, and therefore L = K + R. [0091] As explained below, in some embodiments of a static encoder, two types of redundant symbols are generated. In a specific embodiment, used in many examples here, the first set comprises LDPC symbols and the second set comprises HDPC symbols. Without loss of generality, many examples here refer to S as the number of LDPC symbols and H as the number of HDPC symbols. There can be more than two types of redundant symbols, in which case it is not necessary that R = S + H.
The LDPC symbols and the HDPC symbols have different degree distributions, and one skilled in the art, upon reading this description, will see how to use redundant symbols that are not LDPC or HDPC symbols, but where the redundant symbols comprise two (or more) sets of symbols, each set having a degree distribution different from the degree distributions of the other sets. As is well known, the degree distribution of a set of redundant symbols refers to the distribution of degrees, where the degree of a redundant symbol refers to the number of source symbols on which that redundant symbol depends. [0092] P denotes the number of permanently inactive symbols among the intermediate symbols. The permanently inactive symbols are those that are designated for a particular treatment, namely to be "set aside" or "inactivated" in a belief propagation network in order to allow the belief propagation to continue (and to be returned to afterwards, once the inactivated symbols are solved), where permanently inactivated symbols are distinguished from other inactivated symbols in that the permanently inactivated symbols are designated at the encoder for such treatment. [0093] N denotes the number of received symbols on which a decoding attempt is made by decoder 155, and A is the number of overhead symbols, that is, the number of encoded symbols received beyond K. Thus, A = N - K. [0094] K, R, S, H, P, N and A are integers, typically all greater than or equal to one, but in specific embodiments some of these may be equal to one or zero (for example, R = 0 is the case in which there are no redundant symbols, and P = 0 corresponds to the case of Shokrollahi II, where there is only OTF inactivation). [0095] The vector of source symbols is denoted (C(0), ..., C(K-1)), and the vector of redundant symbols is denoted (C(K), ..., C(L-1)).
Therefore, (C(0), ..., C(L-1)) denotes the vector of intermediate symbols, in the systematic case. A number, P, of these intermediate symbols are designated "permanently inactive". A "PI list" indicates which intermediate symbols are permanently inactive. In many embodiments, the PI list simply points to the last P intermediate symbols, that is, C(L-P), ..., C(L-1), but this is not a requirement; the case is considered here only to simplify the remainder of this description. [0096] The intermediate symbols that are not on the PI list are referred to herein as "LT intermediate symbols". In the example, the LT intermediate symbols would be C(0), ..., C(L-P-1). D(0), ..., D(N-1) denote the received encoded symbols. [0097] It should be noted that where a set of values is described as "N(0), ..., N(x)" or similar, this should not be taken to require at least three values, since it is not intended to exclude the case where there are only one or two values.
Encoding Method Using Permanent Inactivation
[0098] Figure 3 is a block diagram of a specific embodiment of encoder 115 shown in figure 1. As illustrated here, the source symbols are stored in an input buffer 205 and supplied to a static encoder 210 and a dynamic encoder 220, which also receive key inputs and other inputs. Static encoder 210 may include internal storage 215 (memory, buffer, virtual memory, register storage, etc.) for storing internal values and program instructions. Likewise, dynamic encoder 220 may include internal storage 225 (memory, buffer, virtual memory, register storage, etc.) for storing internal values and program instructions. [0099] In some embodiments, a redundancy calculator 230 determines the number R of redundant symbols to create.
In some embodiments, static encoder 210 generates two distinct sets of redundant symbols, and in a specific embodiment, the first set comprises the first S redundant symbols, that is, symbols C(K), ..., C(K+S-1), which are LDPC symbols, while the second set comprises the next H redundant symbols, that is, C(L-H), ..., C(L-1), which are HDPC symbols. If the PI list comprises the last P redundant symbols, then either all H HDPC symbols are on the PI list (if P ≥ H) or all P symbols on the PI list are HDPC symbols (if P < H). [0100] The operations that lead to the generation of these two sets of symbols can be quite different. For example, in some embodiments described below, the operations for generating the LDPC redundant symbols are binary operations and the operations for generating the HDPC symbols are non-binary. [0101] The operation of dynamic encoder 220 is explained in more detail in figure 4. According to one embodiment, dynamic encoder 220 comprises two encoders, a PI encoder 240 and an LT encoder 250. In some embodiments, LT encoder 250 is a chain reaction encoder and PI encoder 240 is a chain reaction encoder of a particular type. In other embodiments, the two encoders may be very similar, or PI encoder 240 is not a chain reaction encoder. No matter how these encoders are defined, they generate symbols, where LT encoder 250 generates its symbols from the LT intermediate symbols C(0), ..., C(L-P-1), which are designated as not permanently inactivated symbols, whereas PI encoder 240 generates its symbols from the permanently inactivated intermediate symbols C(L-P), ..., C(L-1). The two generated symbols enter combiner 260, which generates the final encoded symbol 270. [0102] In some embodiments of the present invention, some of the permanently inactivated symbols may participate in the LT encoding process, and some of the symbols that are not permanently inactivated may participate in the PI encoding process.
In other words, the PI list and the set of symbols comprising the LT intermediate symbols need not be disjoint. [0103] In preferred embodiments, the symbols provided to combiner 260 are of the same length, and the function performed by combiner 260 is an XOR operation on these symbols to generate encoded symbol 270. This is, however, not necessary for the operation of this invention; other types of combiners can be envisioned that lead to similar results. [0104] In other embodiments, the intermediate symbols are subdivided into more than two sets, for example, one set of LT symbols and several (more than one) sets of PI symbols, each with its associated encoder 240. Of course, each associated encoder can be implemented as a common computing element or hardware element that operates on different instructions according to an encoding process when acting as a different encoder for different sets. [0105] An illustrative operation of PI encoding process 241, as it can be performed by PI encoder 240, is shown in figure 5. Using the key I_a corresponding to an encoded symbol to be generated, in step 261 the encoder determines a positive weight WP and a list ALP containing WP integers between L-P and L-1, inclusive. In step 263, if the list ALP = (t(0), ..., t(WP-1)), then the value of a symbol X is determined as X = C(t(0)) ⊕ C(t(1)) ⊕ ... ⊕ C(t(WP-1)), where ⊕ denotes the XOR operation. [0106] In some embodiments, the weight WP is fixed at some number, such as 3, or 4, or some other fixed number. In other embodiments, the weight WP may belong to a small set of possible numbers, for example being chosen equal to 2 or 3. For example, as illustrated in the embodiment of Appendix A, the weight WP depends on the weight of the symbol generated by LT encoding process 251, as it can be performed by LT encoder 250.
If the weight generated by LT encoder 250 is 2, then WP is chosen to be 2 or 3, depending on the key I_a, with the proportions of times that WP equals 2 or 3 being approximately equal; if the weight generated by LT encoder 250 is greater than 3, then WP is chosen equal to 2. [0107] Figure 6 is an example of an LT encoding process 251 according to one embodiment of the present invention, using the teachings of Luby I and Shokrollahi I. In step 267, the key I_a is used to generate a weight WL and a list AL, respectively. In step 269, if the list AL = (j(0), ..., j(WL-1)), then the value of a symbol X is determined as X = C(j(0)) ⊕ C(j(1)) ⊕ ... ⊕ C(j(WL-1)). [0108] Figure 7 illustrates an operation for computing the weight WL. As illustrated here, in step 272, a number v is created that is associated with the encoded symbol to be generated and can be computed based on the key I_a for that encoded symbol. It can be the index, a representative label, etc. of the encoded symbol, or a distinct number, as long as encoders and decoders remain consistent. In this example, v is between 0 and 2^20, but in other examples other ranges are possible (such as 0 to 2^32). The generation of v can be done explicitly using randomness-generating tables, and the exact manner of generating these random numbers can vary. [0109] The encoder is assumed to have access to a table M, an example of which is provided in figure 8. Table M, called the "degree distribution lookup" table, contains two columns and multiple rows. The left column is labeled with the possible values of the weight WL, and the right column is labeled with integers between 0 and 2^20, inclusive. For any value of v, there is exactly one row d of the degree distribution lookup table for which M[d-1] < v ≤ M[d] holds. For that row, the corresponding value in the left column is d, and the encoder uses this as the weight WL for the encoded symbol.
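The table lookup just described can be sketched as follows. The `M` values below are illustrative stand-ins for the first rows of the figure 8 table (an assumption of this sketch, truncated for brevity); because the right column is increasing, the unique d with M[d-1] < v ≤ M[d] can be found with a binary search.

```python
import bisect

# Illustrative (truncated) right column of the degree distribution lookup
# table; row index d is the weight in the left column.  Values are assumed
# for this sketch, not taken verbatim from Appendix A.
M = [0, 5243, 529531, 704294, 791675, 844104,
     879057, 904023, 922747, 937311, 948962]

def weight_WL(v):
    # Return the unique d with M[d-1] < v <= M[d], for 1 <= v <= M[-1].
    return bisect.bisect_left(M, v)
```

For instance, with these values, v = 900,000 falls between M[6] = 879,057 and M[7] = 904,023, giving WL = 7, matching the worked example in the text.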
For example, where an encoded symbol has v = 900,000, the weight for that encoded symbol would be WL = 7. [0110] Static encoder 210 has access to elements SE(k, j), where k = 0, ..., R-1 and j = 0, ..., L-1. These elements can belong to any finite field for which there is an operation * between elements α of the field and symbols X such that α*X is a symbol, and α*(X⊕Y) = α*X ⊕ α*Y, where ⊕ denotes the XOR operation. Such fields and operations are detailed in Shokrollahi IV. The operation of static encoder 210 can be described as computing, for a given sequence of source symbols C(0), ..., C(K-1), a sequence of redundant symbols C(K), ..., C(L-1) satisfying the relation illustrated in Equation 1, where Z(0), ..., Z(R-1) are values known to the encoder and the decoder (for example, 0). [0111] In Equation 1, the entries SE(k, j) can all be binary, or some of them can belong to the field GF(2) while others belong to other fields. For example, the matrix corresponding to the embodiment of Appendix A is given in figure 9. It comprises two submatrices, one with S rows and one with H rows. The upper submatrix comprises two parts: the submatrix comprising the last P columns, in which each row has two consecutive 1s (with positions counted modulo P), and the first W = L-P columns, which comprise circulant matrices followed by an S x S identity matrix. The circulant matrices together comprise B columns, each circulant (except possibly the last) has S columns, and each has S rows; the number of these circulant matrices is ceil(B/S). The columns of these circulant matrices each have exactly three 1s. The first column of circulant matrix k has 1s in positions 0, (k+1) mod S and (2k+2) mod S; the other columns are cyclic shifts of the first. The bottom H rows in figure 9 comprise a matrix Q with entries in GF(256), followed by an H x H identity matrix.
[0112] If α denotes an element of GF(256) with minimal polynomial x^8 + x^4 + x^3 + x^2 + 1, then the matrix Q is equal to the matrix given in figure 10. Here, Δ1, ..., ΔK+S-1 are columns of weight 2 for which the positions of the 2 nonzero entries are determined in a pseudo-random manner according to the procedure outlined in Section 5.3.3.3 of Appendix A. For judicious choices of the values S, P and H (as given in Appendix A), the matrix in figure 10 gives the corresponding code excellent recovery properties. The procedure described above is exemplified in figure 11. In step 276, the SE matrix is initialized to 0. In step 278, an input variable S, equal to the number of LDPC symbols, is provided to the process, and the values SE(i, j) are set to 1 for the pairs (i, j) such that i = j mod S, or i = ((1 + floor(j/S)) + j) mod S, or i = (2*(1 + floor(j/S)) + j) mod S. This step takes care of the circulant matrices of figure 9. [0113] In step 280, the positions corresponding to the identity matrix I_S in figure 9 are set to one. In step 282, the positions corresponding to the PI part of the matrix in figure 9 are set to 1; these positions are of the form (i, l) and (i, t), where l = i mod P and t = (i+1) mod P. In step 284, the positions corresponding to the matrix Q in figure 9 are set; accordingly, the matrix Q is provided as an additional input to this step. In step 286, the positions corresponding to the identity matrix I_H in the matrix of figure 9 are set to one. [0114] Other choices for the SE matrix are possible and depend on the particular application and the demands on the code as a whole. No matter how the matrix of Equation 1 is chosen, the task of static encoder 210 can be accomplished in a variety of ways. For example, Gaussian elimination can be used as a process for recovering the unknown values C(K), ..., C(L-1), as will be apparent to those skilled in the art upon reading this description.
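The index rule of step 278 can be sketched as follows, with the mod-S reduction written out explicitly; the function name is illustrative. Writing a = 1 + floor(j/S) and b = j mod S, column j of the circulant block has 1s in rows b, (b+a) mod S and (b+2a) mod S, which reproduces the three-1s-per-column structure described for figure 9.

```python
def ldpc_circulant_part(S, B):
    """Fill the S x B circulant block of the SE matrix per step 278:
    column j gets 1s at rows b, (b+a) % S, (b+2a) % S,
    where a = 1 + j // S and b = j % S."""
    SE = [[0] * B for _ in range(S)]
    for j in range(B):
        a = 1 + j // S          # circulant number floor(j/S), shifted by 1
        b = j % S
        SE[b][j] = 1
        SE[(b + a) % S][j] = 1
        SE[(b + 2 * a) % S][j] = 1
    return SE
```

For example, with S = 7, the first column of circulant k = 1 (column j = 7) has its 1s in rows 0, 2 and 4, that is, positions 0, (k+1) mod S and (2k+2) mod S.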
Permanent Inactivation Decoding
[0115] The decoding problem can be stated as follows: decoder 155 has N encoded symbols B(Ia), B(Ib), ... with corresponding keys Ia, Ib, .... The whole set of these encoded symbols, or a subset of them, may have been received by the decoder, while other encoded symbols may have been supplied to the decoder by other means. The decoder's objective is to recover the source symbols C(0), ..., C(K-1). To simplify the presentation, we denote the received encoded symbols by D(0), ..., D(N-1). [0116] Many of the decoding operations can be succinctly described using the language of matrices and of operations on such matrices, in particular solving systems of equations with such matrices. In the following description, equations may correspond to received encoded symbols, and variables may correspond to the source symbols, or to a combined set of source and redundant symbols generated from the source symbols, often called intermediate symbols, which are to be solved for based on the received encoded symbols. In the specification provided as Appendix A, the encoded symbols may be referred to as "encoding symbols" (and there are other variations), but it should be apparent, after reading the entire specification and the appendix, how the references relate. It should be understood that the matrices, the operations and the solving of the equations can be implemented as computer instructions corresponding to those mathematical operations, and indeed it is not practical to carry out such operations without a computer, processor, hardware or some electronic element. [0117] Permanent inactivation is used to determine, at the decoder, a set of variables to inactivate, called permanently inactivated symbols or variables, before the first phase of the decoding process is started. The permanent inactivation decoding methods described below can be applied to existing codes, or codes can be specially designed to work even better with permanent inactivation decoding.
Permanent inactivation decoding methods can be applied to solve any system of linear equations, and in particular can be applied to chain reaction codes, IETF LDPC codes and Tornado codes. [0118] Permanent inactivation decoding is a general method that can be applied in combination with belief propagation decoding and/or OTF inactivation decoding whenever solving for a set of unknown variables from a set of known linear equation values, and it is particularly beneficial when implementing efficient encoding and decoding methods that are based on sets of linear equations. In a first phase, based on the structure of the known encoding method or based on the received equations, a set of unknown variables is declared permanently inactivated; the permanently inactivated variables are removed from the linear equations and considered "solved" in the second phase of the decoding process (except that, as the linear equations are reduced in the second phase, the same reductions are performed on the permanently inactivated variables). [0119] In the second phase, belief propagation decoding is applied to the unknown variables that are not permanently inactivated, using the belief propagation decoding described previously, or OTF inactivation decoding is applied to the unknown variables that are not permanently inactivated, similar to what was described for the first phase of the OTF inactivation decoding method, thereby producing a set of reduced encoded symbols or equations. The reduced encoded symbols or equations that result from the second phase have the property that their dependency on the variables or symbols that are not inactivated has been eliminated; the reduced encoded symbols or equations therefore depend only on the inactivated variables or symbols. Note that the original encoded symbols or equations can be retained as well, so that both the original encoded symbols and the reduced encoded symbols may be available in some implementations.
[0120] In a third phase, the permanently inactivated variables, together with any additional OTF inactivated variables generated in the second phase using OTF inactivation decoding, are solved using the reduced encoded symbols or equations, for example using Gaussian elimination or, if available, by exploiting a special structure of the relations between the permanently inactivated variables and the linear equations to solve more efficiently than by Gaussian elimination. [0121] In a fourth phase, the values of the solved inactivated variables, whether OTF inactivated or permanently inactivated, are used in conjunction with the original encoded symbols or equations (or newly derived original encoded symbols or equations) to solve for the variables that were not inactivated. [0122] One of the advantages of permanent inactivation decoding methods is that the number of OTF inactivations in addition to the permanent inactivations can generally be small or zero and can be largely independent of which encoded symbols are received. This can make the decoding complexity consistently small regardless of which encoded symbols are received, allow more reliable decoding, and allow fewer and more predictable memory accesses that can be scheduled more efficiently. Because there are only a small number of OTF inactivations in the second phase, and because OTF inactivations in the second phase are generally determined only during the decoding process, which can make the pattern of symbol operations somewhat unpredictable, the memory access patterns are more predictable during decoding, generally allowing more predictably efficient decoding processes. [0123] There are many variations of the above. For example, the phases can be performed in a non-sequential, interleaved order. As another example, the inactivated symbols can in turn be solved in the third phase using OTF inactivation decoding or permanent inactivation decoding in multiple additional phases.
As another example, permanent inactivation decoding can be applied to linear systems of equations and variables used for error correction codes, erasure correction codes, or other applications that can be solved using systems of linear equations. As another example, these methods can be applied to both systematic and non-systematic codes. As another example, these methods can also be applied during an encoding process, for example when encoding using the methods taught in Shokrollahi III for generating systematic codes from non-systematic codes.

[0124] In some cases, it is possible to design the encoding process so that permanent inactivation decoding methods are especially efficient. For example, belief propagation decoding is known to be computationally efficient whenever it can be applied, but it is also known not to provide highly reliable decoding when used alone. When belief propagation decoding is used within OTF inactivation decoding, the belief propagation steps can be processed very efficiently, but the OTF inactivation steps interspersed among the belief propagation steps can slow down decoding, and the more OTF inactivation steps there are, the slower the decoding process.

[0125] In typical OTF inactivation embodiments, when trying to solve for K + R unknown variables using N + R linear equation values, the number of OTF inactivation steps is typically highest when N = K, that is, when trying to solve for the variables at zero overhead. On the other hand, as N grows beyond K, the complexity of OTF inactivation decoding typically decreases due to the smaller number of OTF inactivation steps, to the point where, for N large enough, in some cases there are no OTF inactivation steps at all and OTF inactivation decoding is as computationally efficient, or almost, as belief propagation decoding.
In other embodiments of OTF inactivation decoding, the number of OTF inactivations can remain large even when N is considerably greater than K.

[0126] In a preferred embodiment of permanent inactivation decoding, the number P of permanently inactivated variables and the structure of the linear equations are designed so that, when the L-P variables that are not permanently inactivated are solved using OTF inactivation decoding from the K + R linear equation values, the number of OTF inactivation steps during OTF inactivation decoding is small, and in some cases zero, and thus the OTF inactivation decoding step is almost as computationally efficient as belief propagation.

[0127] In preferred embodiments, the structure of the linear equations is designed so that the OTF inactivation decoding phase is almost as efficient as belief propagation decoding. In such preferred embodiments, the relationship of the permanently inactivated variables to the linear equations is such that the phase of solving for the inactivated variables, consisting of the permanently inactivated variables together with any OTF inactivated variables from the OTF inactivation phase, can be performed efficiently. Additionally, in preferred embodiments, the structure of the permanently inactivated symbols is such that the phase of completing the solution of the non-inactivated variables from the solved inactivated variables is computationally efficient.

Decoding Chain Reaction Codes with Permanent Inactivation

[0128] Figure 12 illustrates a matrix representation of a set of variables to be solved using N received encoded symbols or equations and R static symbols or equations known by the decoder. The decoder's task is to solve the system of linear equations given in this figure. Typically, symbols/equations are represented by values stored in memory or storage accessible by the decoder, and the matrix operations described below are implemented by instructions executable by the decoder.
[0129] The matrix illustrated in figure 12 comprises L = K + R columns and N + R rows. The submatrix LT represents the relationships between the N encoded symbols and L-P of the L intermediate symbols, as determined by the LT encoding process 251. The submatrix PI represents the relationships between the N encoded symbols and the P remaining intermediate symbols, as determined by the PI encoding process 241. The SE matrix of Equation 1 represents the relationships among the intermediate symbols determined by the static encoder 210. The decoder can determine these relationships based on the keys for the received encoded symbols and on the code construction.

[0130] The system of linear equations of figure 12 is solved by row/column swaps of the matrix above, using the OTF inactivation methods taught in Shokrollahi II, to transform it into the form illustrated in figure 13. That form comprises a lower triangular matrix LO 310, a number of columns comprising matrix 320 (called OTFI) corresponding to OTF inactivations, a matrix 330 PI corresponding to the set of permanently inactivated intermediate symbols or a subset of them, and a matrix 340 EL corresponding to the encoded or static symbols not used in the triangulation process leading to the LO matrix.

[0131] Figure 14 is a block diagram describing elements that can carry out a process resulting in the matrix of figure 12. It comprises an LT matrix generator 347, a PI matrix generator 349, and a static matrix generator 350. Upon receipt of the keys Ia, Ib, ..., the LT matrix generator creates the LT matrix of figure 12, while the PI matrix generator 349 creates the PI matrix of figure 12. The concatenation of these two matrices is sent to the static matrix generator 350, which can take as additional input the static keys S_0, S_1, .... The task of the static matrix generator is the creation of the SE matrix, and its output is the full matrix given in figure 12.
[0132] The operations of the LT matrix generator 347 and the PI matrix generator 349 are tightly coupled to the operations of the LT encoder 250 and the PI encoder 240 of figure 15, respectively. The operation of the static matrix generator 350 is the re-creation of the SE matrix of Equation 1 used for static encoding.

[0133] The LT matrix generator 347, the PI matrix generator 349, and the static matrix generator will now be described in greater detail with reference to the operations they can perform.

[0134] Figure 16 is a flow chart illustrating an embodiment 500 of a method employed by the LT matrix generator 347. In step 505, the LT matrix generator 347 initializes an LT matrix of format N x (L-P) to all zeros. Then, in step 510, the keys Ia, Ib, ... are used to generate the weights WL(0), ..., WL(N-1) and the lists AL(0), ..., AL(N-1), respectively. Each list AL(i) comprises WL(i) integers (j(0), ..., j(WL(i)-1)) in the range 0, ..., L-P-1. In step 515, these integers are used to set the entries LT(i, j(0)), ..., LT(i, j(WL(i)-1)) to 1. As explained above, the LT matrix contributes a system of equations for the unknowns (C(0), ..., C(L-1)) in terms of the received symbols (D(0), ..., D(N-1)).

[0135] As can be appreciated by those skilled in the art, the operation of the LT matrix generator as described here is similar to the operation of the LT encoding process 251 of figure 6.

[0136] Figure 17 is a flow chart illustrating an embodiment 600 of a method employed by the PI matrix generator 349. In step 610, the PI matrix generator 349 initializes a PI matrix of format N x P to all zeros. Next, in step 615, the keys Ia, Ib, ... are used to generate the weights WP(0), ..., WP(N-1) and the lists ALP(0), ..., ALP(N-1), respectively. Each list ALP(i) comprises WP(i) integers (j(0), ..., j(WP(i)-1)) in the range 0, ..., P-1. In step 620, these integers are used to set the entries PI(i, j(0)), ..., PI(i, j(WP(i)-1)) to 1.
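Steps 505 through 515 can be sketched as follows. The keyed pseudo-random generation of each row's weight and neighbor list is shown with a placeholder degree distribution (uniform over 1..max_weight), which is an assumption for illustration; the actual distribution used by the code is not this one:

```python
import random

def make_lt_matrix(keys, N, L, P, max_weight=4):
    """Sketch of steps 505-515: build the N x (L-P) binary LT matrix.
    Each key I_i seeds a PRNG that yields the weight WL(i) and the
    neighbor list AL(i).  The degree distribution here (uniform over
    1..max_weight) is a placeholder, not the code's real distribution."""
    LT = [[0] * (L - P) for _ in range(N)]
    for i, key in enumerate(keys):
        rng = random.Random(key)          # key determines row i
        wl = rng.randint(1, max_weight)   # WL(i)
        al = rng.sample(range(L - P), wl) # AL(i): WL(i) distinct indices
        for j in al:
            LT[i][j] = 1                  # step 515: set entries to 1
    return LT
```

Because each row is a deterministic function of its key, the decoder can regenerate exactly the same matrix rows from the keys of the received symbols, which is the property the matrix generator relies on.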
The operation of the PI matrix generator is similar to the operation of the PI encoding process 241 of figure 5.

[0137] As explained above, the LT and PI matrices contribute a system of equations in the unknowns (C(0), ..., C(L-1)) in terms of the received symbols (D(0), ..., D(N-1)). The reason is as follows: once the LT encoder chooses the weight WL(i) and the associated list AL(i) = (j(0), ..., j(WL(i)-1)), and the PI encoder chooses the weight WP(i) and the associated list ALP(i) = (t(0), ..., t(WP(i)-1)), the corresponding encoded symbol D(i) is obtained as illustrated below. These equations, accumulated over all values of i between 0 and N-1, give rise to the desired system of equations represented in Equation 2.

D(i) = C(j(0)) ⊕ ... ⊕ C(j(WL(i)-1)) ⊕ C(t(0)) ⊕ ... ⊕ C(t(WP(i)-1))    Eq. 2

[0138] The weights WL can be calculated using a procedure similar to that given in figure 7. Those skilled in the art, after reviewing this description, will see how to extend this to the case in which there are more than two encoders, each operating with a different degree distribution.

[0139] A slightly different flowchart of a matrix generator is given in figure 18. It comprises an LT matrix generator 710, a static matrix generator 715, and a PI matrix generator 720. Upon receipt of the keys Ia, Ib, ..., the LT matrix generator 710 creates the LT matrix illustrated in figure 15, while the static matrix generator 715 creates the SE matrix illustrated in figure 15 and can take the static keys S_0, S_1, ... as additional input. The concatenation of these two matrices is sent to the PI matrix generator 720, which creates the PI matrix. The operation of the LT matrix generator 710 can be exactly the same as the operation of the LT matrix generator 347 as detailed in figure 16. The operation of the static matrix generator 715 may differ from the operation of the static matrix generator 350 of figure 14.
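Equation 2 can be sketched directly: an encoded symbol is the XOR of the intermediate symbols named by the two neighbor lists. In this sketch, symbols are byte strings and C holds all L intermediate symbols, with the permanently inactivated ones indexed like any other (a representational choice made here for simplicity):

```python
def xor_symbols(a, b):
    """XOR two equal-length symbols represented as bytes."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_symbol(C, lt_neighbors, pi_neighbors, symbol_size):
    """Eq. 2: D(i) = C(j(0)) xor ... xor C(j(WL(i)-1))
                     xor C(t(0)) xor ... xor C(t(WP(i)-1)),
    where lt_neighbors is AL(i) and pi_neighbors is ALP(i),
    both given here as indices into the full list C."""
    d = bytes(symbol_size)          # all-zero symbol
    for j in lt_neighbors + pi_neighbors:
        d = xor_symbols(d, C[j])
    return d
```

The cost per encoded symbol is WL(i) + WP(i) symbol XORs, which is why keeping the weights small keeps encoding fast.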
Specifically, figure 19 details an illustrative embodiment of such an operation.

[0140] In step 725, the SE matrix is initialized to 0. In step 730, an input variable S, equal to the number of LDPC symbols, is provided to the process, and the values SE(i, j) are set to 1 for the pairs (i, j) with i = j mod S, i = (1 + floor(j/S)) + j mod S, or i = 2*(1 + floor(j/S)) + j mod S. In step 735, the positions corresponding to the identity matrix IS of figure 9 are set to one. In step 740, the positions corresponding to a matrix T are provided as an additional input to that step. This matrix can have entries in multiple finite fields, and it can be different for different applications. It can be chosen based on requirements demanded of the code.

[0141] Figure 20 is a simplified flowchart illustrating an embodiment of a method employed by the PI matrix generator 720. In step 745, the PI matrix generator initializes a PI matrix of format (N+R) x P to all zeros. Then, in step 750, the keys I_a, I_b, ... are used to generate the weights WP(0), ..., WP(N-1) and the lists ALP(0), ..., ALP(N-1), respectively. Each list ALP(i) comprises WP(i) integers (j(0), ..., j(WP(i)-1)) in the range 0, ..., P-1. In step 755, these integers are used to set the entries PI(i, j(0)), ..., PI(i, j(WP(i)-1)) to 1. The operation of the PI matrix generator of figure 20 is similar to the operation of the PI matrix generator of figure 17, except that this generator creates a matrix with R more rows and is coupled to the matrix of figure 15.

[0142] The system of equations of figure 12 or figure 15 is typically sparse, that is, the number of non-zero entries in the matrices involved is typically much less than half of the possible entries. In such a case, the matrices need not be stored directly; instead, an indication can be stored that allows the re-creation of each individual entry of those matrices.
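The LDPC assignment of step 730 places three ones in each column of the LDPC part of SE. The sketch below uses one plausible reading of the formulas, taking each full expression modulo S; that modulus placement is an assumption, since the text leaves it ambiguous:

```python
def ldpc_rows(j, S):
    """Row indices of the ones in column j of the LDPC part of SE,
    reading step 730 as the three positions (each taken mod S):
    j, (1 + floor(j/S)) + j, and 2*(1 + floor(j/S)) + j."""
    a = 1 + j // S
    return sorted({j % S, (a + j) % S, (2 * a + j) % S})

def build_ldpc_block(S, B):
    """S x B binary matrix with three ones per column (fewer only if
    two of the three computed positions coincide)."""
    M = [[0] * B for _ in range(S)]
    for j in range(B):
        for i in ldpc_rows(j, S):
            M[i][j] = 1
    return M
```

Three ones per column keeps this block sparse, which matters for the observation in paragraph [0142] that the overall system is sparse and can be regenerated entry by entry rather than stored.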
For example, for each row of the LT or PI matrices, a process may store the weight and the list of neighbors as computed in figures 5 and 6. Other methods are also possible, and many of them are explained here or in the descriptions incorporated by reference herein.

[0143] Once the matrix generator has created a system of equations in the form given by figure 12 or figure 15, the decoder's task is to solve this system for the unknown values C(0), ..., C(L-1). A number of different methods can be applied to achieve this goal, including, but not limited to, Gaussian elimination, or any of the methods described in Luby I, Luby II, Shokrollahi I, II, III, IV or V.

[0144] A possible method for solving the system of equations of figure 12 or figure 15 is now outlined with reference to figures 21 to 26. A flow chart of a decoder operation according to some embodiments of the present invention is given in figure 21. In step 1305, the decoding matrix is created using some of the methods described previously. In step 1310, this matrix is rearranged using row and column exchanges. As mentioned above, such a matrix can be obtained from any of the matrices of figure 12 or figure 15 by applying row and column exchanges. Chain reaction decoding in combination with the OTF inactivation decoding of Shokrollahi II can be used to achieve this. Accordingly, there are permutations pi operating on the set {0, 1, ..., L-1} and tau operating on the set {0, 1, ..., N+R-1} such that the equation of figure 22 is satisfied.

[0145] Here, w denotes the number of rows and columns of the LO matrix of figure 13, that is, the number of intermediate symbols that are neither permanently inactivated nor OTF inactivated. In step 1315, the LO matrix of figure 13 is used to zero out all entries of the LO matrix below the diagonal. In doing so, the set of symbols on the right-hand side of the equation of figure 23 must undergo the same operations, so that a new right-hand side of the system of equations is obtained by XORs of some of the D(tau(i)).
[0146] As illustrated in figure 24, after this operation matrix 810 becomes an identity matrix, the EL matrix at 840 remains untouched, and the OTFI and PI matrices are changed to OTFI-2 at 820 and PI-2 at 830, since the decoding process XORed rows of these matrices together according to the operations that were necessary to reduce the LO matrix to the identity matrix.

[0147] A next step in the decoding process can be step 1320, where the rest of the remaining matrix below LO is eliminated to obtain a matrix as shown in figure 25. Denoting by E(0), ..., E(N+R-1) the permuted and reduced values of the original symbols D(0), ..., D(N+R-1) after this step, by u the number of rows of the matrix EL_2, and by g the number of columns of EL_2, the matrix structure of figure 25 yields a smaller system of u linear equations for the values C(pi(L-g)), ..., C(pi(L-1)), according to Equation 3.

[0148] A decoding process as described in figure 21 can solve this system of equations in step 1330 by a variety of means, for example by using a Gaussian elimination process, or a combination of chain reaction decoding and Gaussian elimination, or by another application of inactivation decoding, or by other means. The Gaussian elimination can be modified so as to separate the computations in GF(2) from those in larger fields, such as GF(256), if the EL matrix has elements belonging to multiple fields, as taught in Shokrollahi IV, for example.

[0149] If the system of equations in Equation 3 is not solvable using the processes employed by the decoder, then the decoder can take countermeasures in step 1335. Such measures may include signaling an error and halting the process, requesting more encoded symbols, or halting the process and returning to the application using the decoder a list of the intermediate symbols or source symbols that it has been able to recover so far.
If the system is solvable, then the decoder can recover the values of the inactivated intermediate symbols C(pi(L-g)), ..., C(pi(L-1)). In some variations, it may also be the case that some other intermediate symbols, in addition to the inactivated intermediate symbols, are recovered in step 1330.

[0150] Once the values of these symbols are recovered, the decoder proceeds to step 1340, which involves back substitution. The recovery of the values of C(pi(L-g)), ..., C(pi(L-1)) results in a system of equations of the type given in figure 26. This system is easier to solve than a general system. For example, a decoder can use the process shown in figure 23 to do this. The process for obtaining the first vector on the right-hand side of figure 23 can be referred to as back substitution, since it is the process of substituting known symbol values into the system of equations. As can be seen by those skilled in the art after reading this description, the systems given in figures 23 and 26 are mathematically equivalent.

[0151] In figure 23, the decoder obtains the unknown values C(pi(0)), ..., C(pi(L-g-1)) by implementing a process in which the matrix entries on the right-hand side are multiplied by the entries of the already solved vector C(pi(L-g)), ..., C(pi(L-1)) using the matrix multiplication rules, and the results are XORed with E(0), ..., E(L-g-1). This XORing of the obtained values with E(0), ..., E(L-g-1), and thus the recovery of the values of C(pi(0)), ..., C(pi(L-g-1)), comprises step 1345 of the decoder of figure 21.

[0152] Although useful in some applications, this method can lead to a large computational overhead in some preferred embodiments, since the matrix on the right-hand side of figure 23 is typically non-sparse and, therefore, to obtain one of the elements C(pi(j)), a number of XORs needs to be performed that is proportional to g.
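The back substitution of steps 1340-1345 amounts to multiplying the dense right-hand block by the vector of solved inactivated values and XORing the result into E. A minimal GF(2) sketch, with bit-valued symbols standing in for full multi-byte symbols:

```python
def back_substitute(dense_block, solved_inactive, E):
    """Recover the non-inactivated values C(pi(0)..pi(L-g-1)):
        C(pi(i)) = E(i) xor sum_j dense_block[i][j]*solved[j]  (GF(2)),
    where dense_block is the (L-g) x g matrix to the right of the
    identity block and solved_inactive holds the g inactivated values."""
    out = []
    for i, row in enumerate(dense_block):
        v = E[i]
        for j, bit in enumerate(row):
            if bit:
                v ^= solved_inactive[j]
        out.append(v)
    return out
```

Each output value costs up to g XORs because dense_block is typically non-sparse, which is exactly the overhead that paragraph [0152] warns about and that the modified process of figure 27 avoids by returning to the sparse original system.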
In some embodiments, this number can be large, for example because the number of permanent inactivations was chosen to be large initially; in particular, g can be at least as large as P. This can impose severe limitations on the value of P, the number of permanently inactivated symbols, and if a lower value of P is used, this can result in an increase in the number of OTF inactivated intermediate symbols.

[0153] Figure 27 describes a modified decoding process that can be computationally more efficient than the process described in figure 21. Steps 1405 through 1435 of this process can be the same as the corresponding steps of the process of figure 21. Optionally, this process can maintain a copy of the original matrix of figure 12 or figure 15, or of the relevant parts of that matrix, plus the original symbols D(0), ..., D(N+R-1), in an additional memory location for future use. This is not necessary for the process to work, but it can lead to additional speed advantages if the application has sufficient memory resources to maintain these copies. Alternatively, the process can keep only a copy of the original symbols D(0), ..., D(N+R-1), and not of the matrix, and recreate the matrix when necessary. Step 1440 uses the stored copy of the matrix, or undoes the process of step 1415, to get back the original system of equations of figure 22, or just the top part of that system, as given in figure 28. At this point, matrix 1510 of figure 29 is sparse, and the values C(pi(w)), ..., C(pi(L-1)) are known, where w = L-g.

[0154] As is well known, the right-hand side of the equation of figure 29 can be computed through a computationally efficient process involving a small number of symbol XORs, namely a number equal to the number of non-zero entries in the OTFI matrix plus the number of non-zero entries in the PI matrix. This step of the process is denoted 1445 in figure 27.
After this step is completed, the right-hand side of the equation of figure 29 has been computed, and a system of equations remains to be solved in which the unknowns are the values C(pi(0)), ..., C(pi(w-1)). This system can be solved in step 1450 using chain reaction decoding, since the lower triangular matrix LO on the right-hand side is sparse; that is, the number of symbol XORs needed to solve this system of equations is equal to the number of non-zero entries in the LO matrix, and that number is typically much less than w*w, the maximum possible number of non-zero entries.

Choosing the Number of Permanent Inactivations

[0155] Choosing the number of permanent inactivations can affect overall performance, so it can be important. On the one hand, this number should be chosen as large as possible: if this number is large, then the number of OTF inactivations can be reduced to a very small number, sometimes even zero. This is because the combination of the LT and SE matrices of figure 15 (or their corresponding variations in figure 23) is effectively the decoding matrix of a chain reaction code with a large overhead, which makes the number of OTF inactivations very small. OTF inactivations can be more difficult to manage in certain embodiments; thus, reducing their number can yield advantages in terms of speed and/or memory.

[0156] On the other hand, increasing the number of permanent inactivations can have an adverse effect on running time: for example, step 1330 of the decoding process of figure 21, and the corresponding step 1430 of the process of figure 27, require the solution of a system of equations that has at least P rows and columns. One way to do this would be to identify an invertible submatrix of the EL-2 matrix of figure 25, invert that submatrix, and use the inverse to obtain the values of the intermediate symbols C(pi(L-g)), ..., C(pi(L-1)).
Since the EL-2 matrix may not be sparse in many embodiments, obtaining the intermediate symbol values in this way may incur on the order of g x g symbol XORs. Since g is at least P, the number of symbol XORs can be at least P x P; thus, if the overall number of symbol XORs is to be kept linear in K, a good choice is to set P proportional to the square root of K. The specific embodiment of Appendix A chooses P to be on the order of 2.5 * sqrt(K), in line with this observation. This is a good choice for P, since with this choice the number of OTF inactivations is typically very small, ranging from a small fraction of P down to very close to or equal to zero.

[0157] Another quantity of interest is the average number, I, of inactivated intermediate symbol neighbors that an encoded symbol, or a static symbol, has. Step 1445 of the decoding process of figure 27 may require at most I symbol XORs, on average, per unrecovered intermediate symbol. If I is large, then that number of XORs may be too much for the memory and computing resources of the processes running the decoding or encoding process. On the other hand, if I is very small, then the EL-2 matrix of figure 25 may not have full rank, and decodability may be impaired.

[0158] A more detailed analysis reveals that an important aspect of permanent inactivation is to make the PI matrix of figure 15 behave in such a way that its columns are linearly independent of one another, that is, so that the matrix has full rank as often as possible. It is well known to those skilled in the art that if PI is a random binary matrix, then essentially the best achievable rank properties are obtained. On the other hand, PI can have, on average in each column, a fraction of ones that is inversely proportional to the square root of K and still satisfy essentially the same rank properties as a purely random matrix.
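The dimensioning argument above can be checked numerically. A small sketch, where the exact rounding of 2.5 * sqrt(K) is an assumption made here (Appendix A only says "on the order of"):

```python
import math

def num_permanent_inactivations(K):
    """P on the order of 2.5 * sqrt(K), as in the embodiment of
    Appendix A; the ceil() rounding is an assumption for this sketch."""
    return math.ceil(2.5 * math.sqrt(K))

def inactivation_solve_cost(K):
    """Order-of-magnitude cost (symbol XORs) of solving the dense
    P x P inactivated system; with P ~ 2.5*sqrt(K) this is ~6.25*K,
    i.e. it stays linear in K as the argument above requires."""
    P = num_permanent_inactivations(K)
    return P * P
```

For K = 10,000 this gives P = 250 and a dense-solve cost of 62,500 symbol XORs, which is 6.25*K, confirming that the quadratic cost of the inactivated block stays linear in the block size.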
For this reason, the specific embodiment of Appendix A chooses I to be a number between 2 and 3; thus, with P proportional to the square root of K, the number of ones in each PI column is, on average, inversely proportional to the square root of K.

[0159] There are many variations of these methods, as those skilled in the art will recognize after reading this description. For example, the XOR can be replaced by other operators, for example linear operators over larger finite fields, or the operators can be a mix of different operators, for example some linear operators over larger finite fields for some of the operations and other linear operators over smaller finite fields for other operations.

Specific Example with Reference to Appendix A

[0160] As detailed above, with no permanent inactivations (that is, with no predetermined decisions as to which symbols are kept out of the matrix manipulation that determines a schedule for chain reaction decoding), the number of OTF inactivations can be quite random and cause potential problems in terms of memory consumption. When the number of source symbols is very large and the overhead is very small, the probability of error may be unacceptably close to 1.

[0161] Due to the high probability of error for small overheads, it can become increasingly difficult to find good systematic information when the number of source symbols is large. Here, systematic information refers to the information that must be provided to the encoder and decoder in order to be able to build a systematic code in the sense of Shokrollahi III. In addition, whenever such systematic information is obtained, the behavior of the code can be expected to be far from its average behavior, since on "average" the code fails at zero overhead.
[0162] Some of the parameters for building a chain reaction code with permanent inactivation may include the degree distribution Ω used for the LT encoder 250 of figure 4, the parameters for the PI encoder 240, the determination of the number of permanently inactivated symbols, the determination of the number of redundant static symbols and their structure, and the particular way in which random numbers can be generated and shared between encoder 115 and decoder 155 of figure 1.

Encoders and Decoders Using the RQ Code

[0163] A preferred embodiment of a code, hereinafter referred to as the "RQ code", which uses the methods described here, is specified in more detail in Section 5 of Annex A. The remainder of Annex A describes a method of applying the RQ code to the reliable delivery of objects over broadcast and multicast networks.

[0164] The RQ code uses the methods described above and below to implement a systematic code, meaning that all of the source symbols are among the encoded symbols that can be generated, so that the encoded symbols can be considered a combination of the original source symbols and repair symbols generated by the encoder.

[0165] Although some of the previous codes have good properties, there are some improvements that would increase their practical applicability. Two potentially important improvements are a steeper overhead failure curve and a greater number of supported source symbols per source block. The overhead is the difference between the number of encoded symbols received and the number of source symbols in the source block; for example, an overhead of 2 means that K+2 encoded symbols are received to decode a source block with K source symbols. The failure probability at a given overhead is the probability that the decoder fails to completely recover the source block when the number of received encoded symbols corresponds to that overhead.
The overhead failure curve is a representation of how the failure probability drops as a function of increasing overhead, starting at zero overhead. An overhead failure curve is better if the decoder failure probability drops quickly, or steeply, as a function of the overhead.

[0166] A random binary code has an overhead failure probability curve in which the failure probability essentially drops by a factor of two for each additional overhead symbol, albeit with intractable computational complexity; the subject of this discussion, however, is limited to the overhead failure probability curve, not computational complexity. In some applications this is a sufficient overhead failure curve, but for other applications a steeper overhead failure curve is preferred. For example, in a streaming application, the range of the number of source symbols in a source block can be wide, for example K = 40, K = 200, K = 1,000, K = 10,000. To provide a good streaming experience, the failure probability may need to be low, for example a failure probability of 10^-5 or 10^-6. Since bandwidth is often of paramount importance for streaming applications, the percentage of repair symbols sent as a fraction of the source symbols should be minimized. Suppose, for example, that the network over which the stream is sent must be protected against up to 10% packet loss when using source blocks with K = 200, and that the failure probability should be at most 10^-6. A random binary code requires an overhead of at least 20 to achieve a 10^-6 failure probability, that is, the receiver needs 220 encoded symbols to decode with that failure probability. A total of 245 encoded symbols must be sent for each source block to meet the requirements, since ceil(220 / (1 - 0.1)) = 245. In this way, the repair symbols add 22.5% to the bandwidth requirements for the stream.
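The arithmetic of this example can be reproduced directly: the number of symbols to send is ceil((K + overhead) / (1 - loss)), and a random binary code whose failure probability halves per overhead symbol needs roughly ceil(log2(1/target)) overhead symbols:

```python
import math

def symbols_to_send(K, overhead, loss_rate):
    """Encoded symbols to transmit so that the receiver still holds
    K + overhead of them after the given packet-loss rate."""
    return math.ceil((K + overhead) / (1.0 - loss_rate))

def random_code_overhead(target_failure):
    """Overhead o needed by a random binary code whose failure
    probability falls roughly as 2**-o per extra symbol."""
    return math.ceil(-math.log2(target_failure))

# The example in the text: K = 200, 10% loss, target failure 1e-6.
o = random_code_overhead(1e-6)        # 20 overhead symbols
n = symbols_to_send(200, o, 0.10)     # ceil(220 / 0.9) = 245
```

The same helper reproduces the RQ comparison made below: with an overhead of only 2, symbols_to_send(200, 2, 0.10) gives 225, a 12.5% repair-bandwidth cost instead of 22.5%.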
[0167] The RQ code described here and in Section 5 of Annex A achieves a failure probability that is less than 10^-2, 10^-4 and 10^-6 for overheads of 0, 1 and 2, respectively, for values K = K' for all supported values of K', and for values K = K'+1 for all but the final supported value of K'. Tests were performed for a variety of loss probabilities, for example losses of 10%, 20%, 50%, 70%, 90% and 95%.

[0168] For the example above, using the RQ code an overhead of 2 is sufficient to achieve a failure probability of 10^-6, so that only a total of 225 encoded symbols needs to be sent for each source block to meet the requirements, since ceil(202 / (1 - 0.1)) = 225. In this case, the repair symbols add 12.5% to the bandwidth requirements for the stream, that is, 10% less bandwidth overhead than required by a random binary code. Thus, the improved overhead failure curve of the RQ code has some positive practical consequences.

[0169] There are applications where support for a large number of source symbols per source block is desired. For example, in a mobile file broadcast application, it is advantageous from a network efficiency point of view to encode the file as a single source block or, more generally, to partition the file into as few source blocks as is practical. Suppose, for example, that a file of 50 million bytes is to be broadcast, and that the size available within each packet to carry an encoded symbol is one thousand bytes. To encode the file as a single source block, a value of K = 50,000 must be supported. (Note that there are sub-blocking techniques, as previously described, that allow decoding using substantially less memory.)

[0170] There are a few reasons why the number of source symbols supported by a code might be limited. A typical reason is that computational complexity becomes unreasonable as K increases, as is the case for Reed-Solomon codes, but this is not the case for codes such as chain reaction codes.
Another reason may be that the failure probability at zero overhead rises to almost 1 as K grows, making it more difficult to find systematic indices that result in a good systematic code construction. The failure probability at zero overhead can dictate the difficulty of deriving a good code construction, since it is essentially the probability that, when a systematic index is chosen at random, the resulting systematic code construction has the property that the first K encoded symbols are able to decode the K source symbols.

[0171] Since the overhead failure curve for the RQ code design is so steep for all values of K, it is easy to find good systematic indices, and thus to support many larger values of K. The RQ code as described in Section 5 of Annex A supports values of K up to 56,403, and also supports a total number of encoded symbols of up to 16,777,216 per source block. These limits on the supported values for the RQ code were set due to practical considerations based on perceived application requirements, not due to limitations of the RQ code design. Embodiments other than those illustrated in Annex A may have different values.

[0172] The RQ code limits the number of different source block sizes that are supported as follows. For a source block with K source symbols to be encoded and decoded, a value K' is selected based on the table illustrated in Section 5.6 of Annex A. The first column of the table lists the possible values of K'. The value of K' selected is the smallest among the possibilities such that K ≤ K'. The K source symbols C'(0), ..., C'(K-1) are padded with K'-K symbols C'(K), ..., C'(K'-1), with values set to zero, to produce a source block comprising K' source symbols C'(0), ..., C'(K'-1), and then encoding and decoding are performed on that padded source block.
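The K-to-K' padding rule just described, together with the internal symbol numbering it induces (detailed in Section 5.2.1 of Annex A), can be sketched as follows. The K' table here is a tiny illustrative stand-in; the real table in Section 5.6 of Annex A has a few hundred entries, but the selection rule works the same way:

```python
# Illustrative stand-in for the K' table of Section 5.6 of Annex A;
# these particular values are NOT the real table entries.
K_PRIME_TABLE = [10, 12, 18, 20, 26, 30, 101, 250, 501, 1002]

def select_k_prime(K):
    """Smallest supported K' with K <= K' (first-column lookup)."""
    for kp in K_PRIME_TABLE:
        if K <= kp:
            return kp
    raise ValueError("K exceeds the largest supported block size")

def esi_to_isi(esi, K, K_prime):
    """Source symbols keep ESI == ISI (0..K-1); a repair symbol with
    external ESI X >= K has internal ISI X + (K' - K), i.e. the ISI
    numbering skips over the K'-K zero padding symbols."""
    return esi if esi < K else esi + (K_prime - K)
```

For example, with K = 15 the selected K' is 18, the three padding symbols C'(15)..C'(17) are all-zero, and the first repair symbol (ESI 15) has ISI 18 = K', matching the statement that repair symbols start at C'(K').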
[0173] The above approach has the benefit of reducing the number of systematic indices that need to be supported, that is, only a few hundred instead of tens of thousands. There is no disadvantage in terms of the overhead-failure probability for K, since it is equal to the overhead-failure curve for the selected K'. Given the value of K, the decoder can compute the value of K', set the values of C'(K), ..., C'(K'-1) to zero, and thus needs to decode only the remaining K of the K' source symbols of the source block. The only potential disadvantage is that slightly more memory or computational resources may be required for encoding and decoding with slightly more source symbols. However, the spacing between consecutive K' values is approximately 1% for the larger K' values, and therefore this potential disadvantage is negligible. [0174] Due to the padding of the source block from K to K' symbols, the identifier for the symbols C'(0), C'(1), ... within the RQ code is called the Internal Symbol Identifier, abbreviated ISI, where C'(0), ..., C'(K'-1) are the source symbols and C'(K'), C'(K'+1), ... are the repair symbols. [0175] External applications employing the encoder and decoder use an Encoded Symbol Identifier, also called an Encoding Symbol Identifier, abbreviated ESI, which ranges from 0 to K-1 to identify the original source symbols C'(0), ..., C'(K-1), and continues with K, K+1, ... to identify the repair symbols C'(K'), C'(K'+1), .... Thus, a repair symbol C'(X) identified with ISI X within the RQ code is identified externally with ESI X-(K'-K). This is described in more detail in Section 5.2.1 of Annex A. [0176] Encoding and decoding for the RQ codes are defined by two types of relationships: constraint relationships among the intermediate symbols, and LT-PI relationships between the intermediate symbols and the encoded symbols.
The constraint relationships correspond to the relationships among the intermediate symbols defined by the SE matrix, as illustrated, for example, in figure 12 or figure 15. The LT-PI relationships correspond to the relationships between the intermediate symbols and the encoded symbols defined by the LT matrix and the PI matrix, as illustrated, for example, in figure 12 or figure 15. [0177] Encoding proceeds by determining the intermediate symbol values based on: (1) the source symbol values; (2) the LT-PI relationships between the source symbols and the intermediate symbols; and (3) the constraint relationships among the intermediate symbols. The values of the repair symbols can then be generated from the intermediate symbols using the LT-PI relationships between the intermediate symbols and the repair symbols. [0178] Similarly, decoding proceeds by determining the intermediate symbol values based on: (1) the received encoded symbol values; (2) the LT-PI relationships between the received encoded symbols and the intermediate symbols; and (3) the constraint relationships among the intermediate symbols. The values of the missing source symbols can then be generated from the intermediate symbols using the LT-PI relationships between the intermediate symbols and the missing source symbols. Thus, encoding and decoding are essentially symmetric procedures. 
Illustrative Hardware Components 
[0179] Figures 30 and 31 illustrate hardware block diagrams that can be used to implement the methods described above. Each element can be hardware, program code or instructions executed by a general-purpose or special-purpose processor, or a combination thereof. [0180] Figure 30 illustrates an illustrative encoding system 1000, which can be implemented as hardware modules, software modules, or parts of program code stored in a program store 1002 and executed by a processor 1004, possibly as a collective unit of code not separated as illustrated in the figure.
The encoding system 1000 receives an input signal carrying the source symbols and parameter information, and outputs a signal carrying that information. [0181] An input interface 1006 stores the input source symbols in a source symbol store 1008. A source-to-intermediate symbol generator 1010 generates intermediate symbols from the source symbols. This can be a pass-through in some embodiments and a decoder module in other embodiments (such as a "systematic" embodiment). [0182] A redundant symbol generator 1012 generates redundant symbols from the source symbols. This can be implemented as a chain reaction encoder, an LDPC encoder, an HDPC encoder or the like. An inactivator 1014 receives source symbols, intermediate symbols and/or redundant symbols, as appropriate, stores some of them, the permanently inactivated symbols, in a PI store 1018, and supplies the others to an output encoder 1016. This process can be merely logical, rather than physical. [0183] An operator 1020, such as an XOR operator, operates on one or more of the encoded symbols from the output encoder 1016 (one, in certain embodiments) and one or more PI symbols from the PI store 1018 (one, in certain embodiments), and the result of the operation is provided to a transmission interface 1030 that outputs the signal from system 1000. [0184] Figure 31 illustrates an illustrative decoding system 1100, which can be implemented as hardware modules, software modules, or parts of program code stored in a program store 1102 and executed by a processor 1104, possibly as a collective unit of code not separated as illustrated in the figure. Part of the process can be implemented merely logically, rather than physically. [0185] The decoding system 1100 receives an input signal and possibly other information and outputs the source data, if it is able to do so. The input signal is provided to a receiving interface 1106, which stores received symbols in a received symbol store 1108.
ESIs of received symbols are provided to a matrix generator 1110 that generates matrices as described herein, depending on the particular symbols received, and stores the results in a matrix memory 1112. [0186] A scheduler 1114 can read details of the matrix from the matrix memory 1112 and generate a schedule, stored in a schedule memory 1116. The scheduler 1114 can also generate a done signal and convey a PI matrix to a PI solver 1118 when complete. The PI solver 1118 provides the solved PI symbol values to a solver 1120, which also uses the schedule, to decode the intermediate symbols from the received symbols, the schedule and the PI symbols. [0187] The intermediate symbols are provided to an intermediate-to-source symbol generator 1122, which can be an encoder or a pass-through. The output of the intermediate-to-source symbol generator 1122 is provided to an output interface 1124 that outputs the source data, or whatever source data is available, to the output. 
Other Considerations 
[0188] In certain situations, there may be a need for improved decoding capability. In the examples provided elsewhere herein, while the encoded symbols had both LT neighbors and PI neighbors, LDPC symbols had only LT neighbors and PI neighbors that were not among the HDPC symbols. In some cases, the decoding capability is enhanced if the LDPC symbols also have PI neighbors that include HDPC symbols. With neighbors among all the PI symbols, including the HDPC symbols, the decoding of LDPC symbols becomes more similar to that of encoded symbols. As explained elsewhere herein, symbols that depend on LT symbols (which can be easy to encode and decode) and also depend on PI symbols, including HDPC symbols (which can provide highly reliable decoding), provide both advantages. [0189] In one example, each LDPC symbol has two PI neighbors, that is, an LDPC symbol value depends on the values of two PI symbols.
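The combination performed by operator 1020, and more generally the way a symbol's value is formed from its LT and PI neighbors, can be sketched as follows. This is a minimal illustration, assuming byte-string symbols; the function names are ours and this is not the RQ encoder of Annex A.

```python
def xor_symbols(symbols):
    """XOR together a list of equal-length byte strings."""
    out = bytearray(len(symbols[0]))
    for s in symbols:
        for i, byte in enumerate(s):
            out[i] ^= byte
    return bytes(out)

def encoded_symbol_value(intermediate, lt_neighbors, pi_neighbors):
    """A symbol's value as the XOR of its LT-neighbor and PI-neighbor
    intermediate symbols (cf. operator 1020 of figure 30). The neighbor
    lists are indices into the intermediate symbol array."""
    return xor_symbols([intermediate[i] for i in (lt_neighbors + pi_neighbors)])
```

An LDPC symbol with two PI neighbors, as in paragraph [0189], would simply pass a two-element pi_neighbors list.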
[0190] The decoding capability can also be improved, in some situations, by reducing the occurrence of duplicate encoded symbols, where two encoded symbols are duplicates if they have exactly the same overall set of neighbors, where the overall set of neighbors for an encoded symbol is the union of its set of LT neighbors and its set of PI neighbors. Duplicate encoded symbols with the same overall set of neighbors carry exactly the same information about the block of intermediate symbols from which they were generated, and therefore there is no better chance of decoding from having received more than one of the duplicate encoded symbols than from having received exactly one of them; that is, receiving more than one duplicate encoded symbol adds to the reception overhead, and only one of the encoded symbols among the duplicates is useful for decoding. [0191] A preferable property is that each received encoded symbol is not a duplicate of any other received encoded symbol, as this means that each received encoded symbol can be useful for decoding. Thus, it may be preferable to reduce the number of such duplicates or to reduce the likelihood of duplicates occurring. [0192] One approach is to limit the number of LT neighbors that each encoded symbol can have. For example, if there are W possible neighbors, the maximum number of neighbors can be limited to W-2. This reduces the chance of overall neighbor sets being duplicated in some cases, as the set of neighbors comprising all W possible neighbors is not allowed. Where the constraint is Deg[v] = min(d, W-2), there are W*(W-1)/2 different sets of neighbors of degree W-2. Thus, it may be less likely that duplicate overall neighbor sets will be generated for encoded symbols. Other restrictions, such as min(d, W-Wg) for some Wg other than Wg = 2, or some other restriction, can be used.
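The degree cap of paragraph [0192], and the count of W*(W-1)/2 distinct neighbor sets of degree W-2, can be checked with a short sketch. The function names here are ours, for illustration only.

```python
from math import comb

def clamped_degree(d, W, Wg=2):
    """Limit the LT degree to at most W - Wg neighbors, as in
    Deg[v] = min(d, W - Wg) of [0192]."""
    return min(d, W - Wg)

def num_degree_sets(W):
    """Number of distinct neighbor sets of degree W-2 among W possible
    neighbors; equals C(W, W-2) = W*(W-1)/2."""
    return comb(W, W - 2)
```

For example, with W = 10 possible neighbors there are 45 distinct degree-8 neighbor sets, consistent with the W*(W-1)/2 count stated above.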
[0193] Another technique, which can be used alone or together with the duplicate-reduction technique above, is to choose more than one PI neighbor for each encoded symbol, so that duplicates among the PI neighbors of the encoded symbols are less likely and, thus, duplicate overall neighbor sets are less likely to be generated for encoded symbols. PI neighbors can be generated similarly to how LT neighbors are generated, for example, by first generating a triple (d1, a1, b1), as illustrated in Annex A, Section 5.3.5.4, according to the code fragment below: 
if (d < 4) then { d1 = 2 + Rand[y, 3, 2] } else { d1 = 2 }; 
a1 = 1 + Rand[y, 4, P1-1]; 
b1 = Rand[y, 5, P1]; 
[0194] Note that in this example there is a non-trivial random degree distribution defined on the number of PI neighbors d1, that the distribution depends on the chosen number of LT neighbors d, and that the number of PI neighbors is likely to be greater when the number of LT neighbors is smaller. This provides the property that the overall degree of an encoded symbol is such that it reduces the chance that duplicate encoded symbols will be generated and thus received. [0195] The encoded symbol value can then be generated using the neighbors defined by (d1, a1, b1), as illustrated in Annex A, Section 5.3.5.3, and by the following code fragment: 
while (b1 >= P) do { b1 = (b1 + a1) % P1 }; 
result = result ^ C[W + b1]; 
for j = 1, ..., d1-1 do 
  b1 = (b1 + a1) % P1; 
  while (b1 >= P) do { b1 = (b1 + a1) % P1 }; 
  result = result ^ C[W + b1]; 
Return result; 
[0196] To support these enhanced decoding capabilities, or separately to provide enhanced decoding capabilities, a different systematic index J(K') for the K' values can be used, such as the one illustrated in Table 2 of Section 5.6 in Annex A. [0197] An example of a process that is carried out in a transmission and/or reception system to generate a systematic index J(K') is illustrated as follows.
For each K' in the list of possible values of K', a process that can be carried out, typically by a suitably programmed circuit or processor, is the verification of a number of indices for suitability. For example, the circuit/processor can check, for J = 1, ..., 1000 [or some other limit], that the following criteria are met with respect to the possible systematic index J: (a) Is decoding possible with zero overhead from the K' source symbols? If the answer is yes, record the number of OTF inactivations. (b) Are there duplicate overall neighbor sets among the first K'/0.06 possible encoded symbols (with ESIs 0, ..., K'/0.06)? [Other limits can be used.] (c) Is the decoding failure probability below 0.007 [or some other limit] when decoding using the first K' encoded symbols received within 10,000 runs [or some other test], when each encoded symbol is lost with a probability of 0.93 [or some other limit] in each run independently of the other encoded symbols? [0198] The circuit/processor then chooses, among the possible systematic indices J that satisfy criteria (a), (b) and (c) above, the systematic index that recorded the smallest number of OTF inactivations in step (a). [0199] Note that there are many variations of the selection criteria above. For example, in some cases it may be preferable to choose the systematic index that satisfies (a), (b) and (c) above and results in the fewest decoding failures in step (c) within the specified number of runs. As another example, a combination of the number of OTF inactivations and the probability of decoding failure can be taken into account when choosing a systematic index. As another example, multiple systematic indices for each K' value may be available, with one of them chosen at random within particular applications. [0200] The systematic indices for the K' values listed in Table 2 in Section 5.6 of Annex A represent one potential list of systematic indices for the code described in Annex A.
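The search of paragraphs [0197]-[0198] can be sketched as a skeleton in which the three criteria are supplied as callables, since a real implementation would run the actual RQ encoder/decoder for each candidate index. All names below are illustrative placeholders of ours, not part of the specification.

```python
def select_systematic_index(zero_overhead_inactivations,  # J -> count, or None if (a) fails
                            has_duplicate_neighbor_sets,  # J -> bool, criterion (b)
                            failure_probability,          # J -> float, criterion (c)
                            j_limit=1000,
                            failure_limit=0.007):
    """Among J = 1..j_limit, keep the indices satisfying criteria (a)-(c)
    and return the one with the fewest OTF inactivations ([0198])."""
    best_j, best_inact = None, None
    for j in range(1, j_limit + 1):
        inact = zero_overhead_inactivations(j)        # criterion (a)
        if inact is None:
            continue
        if has_duplicate_neighbor_sets(j):            # criterion (b)
            continue
        if failure_probability(j) >= failure_limit:   # criterion (c)
            continue
        if best_inact is None or inact < best_inact:
            best_j, best_inact = j, inact
    return best_j
```

The variations of paragraph [0199] (fewest decoding failures, or a combined score) would change only the final comparison in the loop.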
Variations of a Sub-Blocking Process 
[0201] The formation of sub-blocks, the division of blocks into smaller units, physically or logically, for further processing, is known for several reasons. For example, it is used in IETF RFC 5053, and it is also known from U.S. Patent No. 7,072,971. One of the basic uses of the sub-blocking method is to allow a large block of data to be protected as a single entity by an FEC code, while at the same time using a much smaller amount of memory than the data block size at a receiver to recover the data block using an FEC decoder. [0202] A method for choosing the number of sub-blocks described in IETF RFC 5053 provides a good source block partition and sub-block partition for many reasonable parameter settings, but in some circumstances it may produce a solution that does not strictly satisfy an upper limit WS on the sub-block size (although even in these cases it produces solutions where the size of the sub-block is only a modest factor greater than the given WS constraint on the sub-block size). As another example, in draft-luby-rmt-bb-fec-raptorg-object-00 (where the maximum number of source symbols in a source block is much greater than in IETF RFC 5053), Section 4.2 provides the recipe below to calculate T, Z and N, where T is the symbol size, Z is the number of source blocks into which the file (or data block) is divided, and N is the number of sub-blocks. In addition, P' is the size of the packet payload available for symbols, F is the file size in bytes, K'_max is the maximum number of supported source symbols (e.g., 56,404), A1 is an alignment factor specifying that symbols and sub-symbols must be multiples of A1 bytes in size to allow for more efficient decoding (for example, A1 = 4 is preferred for a modern CPU), and WS is the desired upper limit on the sub-block size in bytes. [0203] Note that the derivation of the parameters T, Z and N can be done at a sender or alternatively at a server based on the values of F, A1 and P'.
The receiver only needs to know the values of F, A1, T, Z and N in order to determine the source block and sub-block structure of the file or data block from the received packets belonging to the file or data block. The receiver can determine P' from the size of the received packets. Note that packets sent and received also typically contain other information that identifies the contents of the packet, for example, an FEC payload ID that is typically 4 bytes in size and that carries the source block number (SBN) and the ESI of the first symbol carried in the packet. [0204] A previous method, described in Section 4.2 of draft-luby-rmt-bb-fec-raptorg-object-00, for calculating T, Z and N is to set them to the following values: 
T = P' 
Kt = ceiling(F/T) 
Z = ceiling(Kt/K'_max) 
N = min{ceiling(ceiling(Kt/Z) * T/WS), T/A1} 
[0205] In these calculations, ceiling() is the function that outputs the smallest integer greater than or equal to its input, and floor() is the function that outputs the largest integer less than or equal to its input. In addition, min() is the function that outputs the minimum of its inputs. [0206] A problem with this form of source block derivation and sub-block division, for some parameter settings, is that if T/A1 is less than ceiling(ceiling(Kt/Z) * T/WS), then the upper limit WS on the sub-block size is not respected. [0207] A potential secondary problem is that this allows sub-symbols to be as small as A1, which is typically set to 4 bytes, and this may be too small to be efficient in practice. Typically, the smaller the sub-symbol size, the more processing overhead there is to decode or encode sub-blocks. Additionally, especially at a receiver, a smaller sub-symbol size means that more sub-blocks need to be demultiplexed and decoded, and this can consume receiver resources such as CPU cycles and memory accesses.
On the other hand, a smaller allowed sub-symbol size means that a source block can be divided into more sub-blocks while respecting a specified upper limit WS on the sub-block size. In this way, smaller sub-blocks allow a larger source block to be supported, and thus the FEC protection provided over that source block results in better protection and better network efficiency. In practice, in many cases, it is preferable to ensure that the sub-symbols have at least a specified minimum size, which provides the opportunity for a better balance between the processing and memory requirements at a receiver and the efficient use of the network. [0208] As an example of the parameters derived using the previous method described in Section 4.2 of draft-luby-rmt-bb-fec-raptorg-object-00 to calculate T, Z and N: 
F = 56,404 KB 
P' = 1 KB = 1,024 bytes 
WS = 128 KB = 131,072 bytes 
A1 = 4 
K'_max = 56,404 
Calculations: 
T = 1 KB 
Kt = 56,404 
Z = 1 
N = 256 (due to the second entry of the min function) 
[0209] In this example, there is one source block comprising 256 sub-blocks, where each sub-block is approximately 220 KB (larger than WS), with at least some sub-blocks having a sub-symbol size of 4 bytes (extremely small). [0210] A third problem is that an FEC solution may not support all possible numbers of source symbols; that is, it may support only a selected list of values K', where K' is a supported number of source symbols in a source block, and then, if the actual desired number K of source symbols in a source block is not among the K' values, K is padded up to the next value of K', which means that the size of the source block that is used may be slightly larger than implied by the value of K calculated as above. [0211] New methods of forming sub-blocks are now described, which are improvements over the previous methods described above.
For purposes of description, a sub-blocking module can take as its input the data to be divided, F, and values including WS, A1, SS and P', where the meaning of these variables is described in greater detail below. [0212] WS represents a provided constraint on the maximum size of a sub-block, possibly in units of bytes, that is decodable in working memory at a receiver. A1 represents a memory alignment parameter. Since receiver memory may work more efficiently if symbols and sub-symbols are aligned in memory along memory alignment boundaries, it can be useful to track A1 and store symbols and sub-symbols whose sizes are multiples of A1 bytes. For example, typically A1 = 4, as many memory devices naturally address data in memory at four-byte boundaries. Other values of A1 are also possible, for example, A1 = 2 or A1 = 8. Typically, A1 can be set to the least common multiple of the memory alignments among all the possible receivers. For example, if some receivers support 2-byte memory alignment but other receivers support 4-byte memory alignment, then A1 = 4 would be recommended. [0213] The parameter SS is determined based on the preferred lower limit on the sub-symbol size, so that the lower limit on the sub-symbol size is SS * A1 bytes. It may be preferable to have the sub-symbol size be a multiple of A1, since decoding operations are typically performed on sub-symbols. [0214] The following is a detailed explanation of a method of dividing the data F into Z source blocks and then dividing these Z source blocks into sub-blocks. In this description, P' refers to a variable stored in memory (or implied) representing the bytes available within packets for the symbols that are to be sent, and P' is considered to be a multiple of A1. T is a variable representing the size of the symbols that are to be placed within the sent packets. Other variables can be inferred from the text. 
New Method of Forming Sub-Blocks for the Determination of T, Z and N
[0215] 
T = P' 
Kt = ceiling(F/T) 
N_max = floor(T/(SS*A1)) 
For each n = 1, ..., N_max: KL(n) is the maximum K' value supported as a possible number of source symbols in a source block that satisfies K' ≤ WS/(A1*ceiling(T/(A1*n))) 
Z = ceiling(Kt/KL(N_max)) 
N = minimum n such that ceiling(Kt/Z) ≤ KL(n) 
[0216] Once these parameters have been determined, the size of each of the Z source blocks and the sizes of the sub-symbols of the N sub-blocks of each source block can be determined as described in IETF RFC 5053, that is, Kt = ceiling(F/T), (KL, KS, ZL, ZS) = Division[Kt, Z], and (TL, TS, NL, NS) = Division[T/A1, N]. [0217] Kt is the number of source symbols in the file. In the sub-blocking module, the Kt source symbols are divided into Z source blocks: ZL source blocks with KL source symbols each and ZS source blocks with KS source symbols each. Then KL is rounded up to KL', where KL' is the smallest supported number of source symbols that is at least KL (and KL'-KL zero-padding symbols are added to the source block for encoding and decoding purposes, but these additional symbols are typically neither sent nor received), and, similarly, KS is rounded up to KS', where KS' is the smallest supported number of source symbols that is at least KS (and KS'-KS zero-padding symbols are added to the source block for encoding and decoding purposes, but these additional symbols are typically neither sent nor received). [0218] These calculations (performed by the sub-blocking module, another software module, or hardware) ensure that the numbers of source symbols in the source blocks are as equal as possible, subject to the constraint that they total the number, Kt, of source symbols in the file. These calculations also ensure that the sizes of the sub-symbols of the sub-blocks are as equal as possible, subject to the constraints that they are multiples of A1 and that they total the symbol size.
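The derivation of [0215]-[0216] can be sketched directly in Python. This sketch assumes, as in the examples of paragraph [0227] below, that every value of K' is supported, so KL(n) is simply floor(WS/(A1*ceiling(T/(A1*n)))); a real implementation would instead look KL(n) up against the table of supported K' values. The function names are ours.

```python
def division(i, j):
    """The Division[I, J] function: (IL, IS, JL, JS) with
    IL = ceiling(I/J), IS = floor(I/J), JL = I - IS*J, JS = J - JL."""
    il = -(-i // j)          # ceiling via integer arithmetic
    is_ = i // j
    jl = i - is_ * j
    return il, is_, jl, j - jl

def derive_t_z_n(F, P, WS, A1, SS):
    """New sub-blocking method of [0215], assuming all K' are supported."""
    T = P
    Kt = -(-F // T)                              # ceiling(F/T)
    N_max = T // (SS * A1)                       # floor(T/(SS*A1))
    def KL(n):
        # largest K' with K' <= WS/(A1*ceiling(T/(A1*n)))
        return WS // (A1 * (-(-T // (A1 * n))))
    Z = -(-Kt // KL(N_max))
    N = next(n for n in range(1, N_max + 1) if -(-Kt // Z) <= KL(n))
    return T, Kt, Z, N
```

Running this on the inputs of Example 1 below (F = 6,291,456, P' = 1,240, WS = 131,072, A1 = 4, SS = 5) reproduces T = 1,240, Kt = 5,074, Z = 1 and N = 52, and Division[310, 52] gives (TL, TS, NL, NS) = (6, 5, 50, 2), matching the listed sub-symbol sizes of 24 and 20 bytes.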
[0219] Then the sub-symbol parameters TL, TS, NL and NS are calculated, where there are NL sub-blocks that use the larger sub-symbol size TL*A1 and NS sub-blocks that use the smaller sub-symbol size TS*A1. The Division[I, J] function is implemented in software or hardware and is defined as a function whose output is a sequence of four integers (IL, IS, JL, JS), where IL = ceiling(I/J), IS = floor(I/J), JL = I - IS*J, and JS = J - JL. [0220] Some of the properties of these new methods deserve attention. A sub-blocking module can determine a derived lower limit on the smallest sub-symbol size used. From the equations above, it is known that TS = floor(T/(A1*N)), where TS*A1 is the smallest sub-symbol size used, since TS ≤ TL. Note that the smallest sub-symbol results when N = N_max. Using X/floor(Y) ≥ X/Y for positive X and Y, TS is at least floor(T/(A1*floor(T/(SS*A1)))), which is, in turn, at least floor(SS) = SS. Due to these facts, the smallest sub-symbol size produced by the division method described here is at least TS*A1 = SS*A1, as desired. [0221] A sub-blocking module can determine a derived upper limit on the largest sub-block size. The largest sub-block size used is TL*A1*KL', where KL' is the smallest supported K' value that is at least KL = ceiling(Kt/Z). Note that, by the definition of N, KL' ≤ KL(N), and TL = ceiling(T/(A1*N)). Since KL(N) ≤ WS/(A1*ceiling(T/(A1*N))), it follows that WS ≥ KL(N)*A1*ceiling(T/(A1*N)) ≥ KL'*A1*TL. [0222] The variable N_max represents the largest number of sub-symbols into which a source symbol of size T can be divided. Setting N_max to floor(T/(SS*A1)) ensures that the smallest sub-symbol size is at least SS*A1.
KL(n) is the largest number of source symbols in a source block that can be supported when the symbols of the source block are each divided into n sub-symbols, while ensuring that each of the sub-blocks of the source block is at most WS in size. [0223] The number Z of source blocks can be chosen as small as possible, subject to the constraint that the number of source symbols in each source block is at most KL(N_max), which ensures that each source symbol can be divided into sub-symbols of size at least SS*A1 and that the resulting sub-blocks are at most WS in size. The sub-blocking module determines, from the value of Z, the number of source blocks and the number of symbols in each of the Z source blocks. [0224] Note that if any value of Z smaller than the one produced by this division method is used, then there will be a sub-block of one of the source blocks that is greater than WS, or there will be a sub-block of one of the source blocks that has a sub-symbol size smaller than SS*A1. In addition, the smallest of the source blocks that this division method produces is as large as possible subject to these two constraints; that is, there is no other method of dividing the file or data block into source blocks that respects both constraints and makes the smallest source block larger than the smallest source block produced by this division method. Thus, in this sense, the value of Z produced by this division method is optimal. [0225] The number N of sub-blocks into which a source block is divided can be chosen to be as small as possible, subject to the constraint that, for each sub-block, the size of the sub-symbols of the sub-block multiplied by the number of source symbols in the source block that the sub-block divides is at most WS. [0226] Note that if any value of N smaller than the one produced by this division method from the value of Z is used, then there will be at least one sub-block whose size exceeds WS.
In addition, the smallest sub-symbol size that this division method produces from the determined value of Z is as large as possible subject to the constraint that the largest sub-block size cannot exceed WS; that is, there is no other method of producing sub-blocks of the source blocks determined by the value of Z that respects the constraint on the largest sub-block and makes the smallest sub-symbol size larger than the smallest sub-symbol size produced by this division method. Thus, in this sense, the value of N produced by this division method is optimal. [0227] In the following examples, all possible values of K' are considered to be supported as a number of source symbols in a source block. 
Example 1 
[0228] Input: 
SS = 5 
A1 = 4 bytes (minimum sub-symbol size = 20 bytes) 
WS = 128 KB = 131,072 bytes 
P' = 1,240 bytes 
F = 6 MB = 6,291,456 bytes 
Calculations: 
T = 1,240 bytes 
Kt = 5,074 
N_max = 62 
KL(N_max) = 6,553 
Z = 1 
KL = ceiling(Kt/Z) = 5,074 
N = 52 (KL(N) = 5,461) 
TL = 6, largest sub-symbol = 24 bytes 
TS = 5, smallest sub-symbol = 20 bytes 
TL * A1 * KL = 121,776 
Example 2 
[0229] Input: 
SS = 8 
A1 = 4 bytes (minimum sub-symbol size = 32 bytes) 
WS = 128 KB = 131,072 bytes 
P' = 1 KB = 1,024 bytes 
F = 56,404 KB = 57,757,696 bytes 
Calculations: 
T = 1,024 bytes 
Kt = 56,404 
N_max = 32 
KL(N_max) = 4,096 
Z = 14 
KL = ceiling(Kt/Z) = 4,029 
N = 32 (KL(N) = 4,096) 
TL = 8, largest sub-symbol = 32 bytes 
TS = 8, smallest sub-symbol = 32 bytes 
TL * A1 * KL = 128,928 
[0230] There are many variations of the above methods. For example, for some FEC codes it is desirable to have at least a minimum number of source symbols in a source block, to minimize the reception overhead for the source block under the FEC code. Since for very small file or data block sizes F the source symbol size can become very small, there can also be a maximum number of symbols into which a packet is divided.
For example, in IETF RFC 5053, the minimum number of source symbols in a source block is Kmin = 1024 and the maximum number of symbols into which a packet is divided is Gmax = 10. [0231] Below is another variation of the new sub-blocking method described above, which takes into account the additional parameters Kmin and Gmax just described, where G is the number of symbols of a source block carried in each packet. It is performed by a sub-blocking module or, more generally, by some module or software or hardware in an encoder, decoder, transmitter and/or receiver. [0232] In this variation, each packet carries the ESI of the first symbol in the packet, and each subsequent symbol in the packet implicitly has an ESI that is one greater than that of the previous symbol in the packet. 
New Method of Forming Sub-Blocks for the Determination of G, T, Z and N 
[0233] 
G = min(ceiling(P' * Kmin/F), floor(P'/(SS*A1)), Gmax) 
T = floor(P'/(A1*G)) * A1 
Kt = ceiling(F/T) 
N_max = floor(T/(SS*A1)) 
For each n = 1, ..., N_max: KL(n) is the maximum K' value supported as a possible number of source symbols in a source block that satisfies K' ≤ WS/(A1*ceiling(T/(A1*n))) 
Z = ceiling(Kt/KL(N_max)) 
N = minimum n such that ceiling(Kt/Z) ≤ KL(n) 
[0234] Note that by the way in which G is calculated, the symbol size is guaranteed to be at least SS*A1, that is, the symbol size is at least the minimum sub-symbol size. Note also that it must be the case that P' is at least SS*A1 to ensure that the symbol size can be at least SS*A1 (and if not, then G will evaluate to zero).
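The Kmin/Gmax variation of [0233] can be sketched in the same way as the earlier method. As before, this assumes all K' values are supported (as in the examples), and the function name is ours.

```python
def derive_g_t_z_n(F, P, WS, A1, SS, Kmin, Gmax):
    """Sub-blocking variation of [0233] with the Kmin and Gmax parameters,
    assuming all K' values are supported."""
    G = min(-(-P * Kmin // F),        # ceiling(P'*Kmin/F)
            P // (SS * A1),           # floor(P'/(SS*A1))
            Gmax)
    T = (P // (A1 * G)) * A1          # floor(P'/(A1*G)) * A1
    Kt = -(-F // T)                   # ceiling(F/T)
    N_max = T // (SS * A1)
    def KL(n):
        # largest K' with K' <= WS/(A1*ceiling(T/(A1*n)))
        return WS // (A1 * (-(-T // (A1 * n))))
    Z = -(-Kt // KL(N_max))
    N = next(n for n in range(1, N_max + 1) if -(-Kt // Z) <= KL(n))
    return G, T, Kt, Z, N
```

On the inputs of Example 3 below, this yields G = 3, T = 412, Kt = 1,243, Z = 1 and N = 2, matching the listed calculations.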
Example 3 
[0235] Input: 
SS = 5 
A1 = 4 bytes (minimum sub-symbol size = 20 bytes) 
WS = 256 KB = 262,144 bytes 
P' = 1,240 bytes 
F = 500 KB = 512,000 bytes 
Kmin = 1,024 
Gmax = 10 
Calculations: 
G = 3 
T = 412 
Kt = 1,243 
N_max = 20 
KL(N_max) = 10,922 
Z = 1 
KL = ceiling(Kt/Z) = 1,243 
N = 2 (KL(N) = 1,260) 
TL = 52, largest sub-symbol = 208 bytes 
TS = 51, smallest sub-symbol = 204 bytes 
TL * A1 * KL = 258,544 
[0236] As described, these new methods introduce a constraint on the smallest sub-symbol size used for any sub-block. This description provides new sub-blocking methods that respect this additional constraint while at the same time strictly respecting a constraint on the maximum sub-block size. The methods produce source block and sub-block partitions that satisfy the objectives of dividing a file or data block into the fewest possible source blocks subject to a constraint on the smallest sub-symbol size, and then dividing these into the fewest possible sub-blocks subject to a constraint on the maximum sub-block size. 
Variations 
[0237] In some applications, it may be acceptable not to be able to decode all source symbols, or to be able to decode all source symbols only with a relatively low probability. In such applications, a receiver may stop trying to decode all source symbols after receiving K+A encoded symbols. Or the receiver may stop receiving encoded symbols after receiving fewer than K+A encoded symbols. In some applications, the receiver may even receive only K or fewer encoded symbols. Thus, it should be understood that in some embodiments of the present invention, the desired degree of accuracy need not be the complete recovery of all source symbols.
[0238] Additionally, in some applications where incomplete recovery is acceptable, the data can be encoded so that not all source symbols can be recovered, or so that complete recovery of the source symbols requires receiving many more encoded symbols than the number of source symbols. Such an encoding would generally require less computational expense, and may therefore be an acceptable way to reduce the computational cost of encoding. [0239] It should be understood that the various functional blocks in the figures described above may be implemented by a combination of hardware and/or software, and that in specific implementations some or all of the functionality of some of the blocks may be combined. Similarly, the various methods taught herein may be implemented by a combination of hardware and/or software. [0240] The above description is illustrative and not restrictive. Many variations of the invention will become apparent to those skilled in the art upon review of this description. The scope of the invention should, therefore, be determined not with reference to the above description, but instead with reference to the appended claims, together with their full scope of equivalents.

Annex A [0241]
Reliable Multicast Transport: M. Luby, Qualcomm Incorporated
Internet-Draft; Intended status: Standards Track; Expires: February 12, 2011
A. Shokrollahi, EPFL; M. Watson, Qualcomm Incorporated; T. Stockhammer, Nomor Research; L. Minder, Qualcomm Incorporated
August 11, 2010
RaptorQ Forward Error Correction Scheme for Object Delivery
draft-ietf-rmt-bb-fec-raptorq-04
Summary [0242] This document describes a Fully-Specified FEC scheme, corresponding to FEC Encoding ID 6 (to be confirmed (tbc)), for the RaptorQ forward error correction code and its application to the reliable delivery of data objects.
[0243] RaptorQ codes are a new family of codes that provide superior flexibility, support for larger source block sizes, and better coding efficiency than the Raptor codes in RFC5053. RaptorQ is also a fountain code, that is, as many encoding symbols as needed can be generated on the fly by the encoder from the source symbols of a source block. The decoder is able to recover the source block from any set of encoding symbols equal in number, in most cases, to the number of source symbols, and in rare cases from slightly more than the number of source symbols. [0244] The RaptorQ code described here is a systematic code, meaning that all the source symbols are among the encoding symbols that can be generated.
Status of This Memo [0245] This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. [0246] Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts can be found at http://datatracker.ietf.org/drafts/current/. [0247] Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or made obsolete by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress". [0248] This Internet-Draft will expire on February 12, 2011.
Copyright Notice [0249] Copyright (c) 2010 IETF Trust and the persons identified as the document authors. All rights reserved. [0250] This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Code components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. [0251] This document specifies an FEC Scheme for the RaptorQ forward error correction code for object delivery applications. The concept of an FEC Scheme is defined in RFC5052 [RFC5052], and this document follows the format prescribed there and uses the terminology of that document. The RaptorQ code described here is a next generation of the Raptor code described in RFC5053 [RFC5053]. The RaptorQ code provides superior reliability, better coding efficiency, and support for larger source block sizes than the Raptor code of RFC5053 [RFC5053]. These improvements simplify the use of the RaptorQ code in an object delivery Content Delivery Protocol compared to RFC5053 [RFC5053]. [0252] The RaptorQ FEC Scheme is a Fully-Specified FEC Scheme corresponding to FEC Encoding ID 6 (tbc). [0253] Editor's note: The final FEC Encoding ID has yet to be assigned, but '6 (tbc)' is used as a temporary value in this Internet-Draft pending the sequential assignment of FEC Encoding IDs in the IANA registration process. [0254] RaptorQ is a fountain code, that is, as many encoding symbols as needed can be generated on the fly by the encoder from the source symbols of a source block. The decoder is able to recover the source block from any set of encoding symbols only slightly greater in number than the number of source symbols. [0255] The code described in this document is a systematic code, that is, the original source symbols can be sent unmodified from sender to receiver, along with a number of repair symbols. For more information on the use of Forward Error Correction codes in reliable multicast, see [RFC3453]. 2.
Requirements Notation [0256] The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. 3. Formats and Codes 3.1 FEC Payload IDs [0257] The FEC Payload ID MUST be a 4-octet field defined as follows: [0258] Source Block Number (SBN) (8 bits, unsigned integer): A non-negative integer identifier for the source block that the encoding symbols within the packet relate to. [0259] Encoding Symbol ID (ESI) (24 bits, unsigned integer): A non-negative integer identifier for the encoding symbols within the packet. [0260] The interpretation of the Source Block Number and the Encoding Symbol Identifier is defined in Section 4. 3.2 FEC Object Transmission Information 3.2.1 Mandatory [0261] The value of the FEC Encoding ID MUST be 6, as assigned by IANA (see Section 7). 3.2.2 Common [0262] The Common FEC Object Transmission Information elements used by this FEC Scheme are: Transfer Length (F) (40 bits, unsigned integer): A non-negative integer that is at most 946270874880. This is the transfer length of the object in units of octets. [0263] Symbol Size (T) (16 bits, unsigned integer): A positive integer that is less than 2^^16. This is the size of a symbol in units of octets. [0264] The encoded Common FEC Object Transmission Information format is illustrated in figure 2. [0265] Note 1: The limit of 946270874880 on the transfer length is a consequence of the limitation of the symbol size to 2^^16-1, the limitation of the number of symbols in a source block to 56403, and the limitation of the number of source blocks to 2^^8.
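The limit quoted in Note 1 can be checked directly from the three constraints it names; a minimal sketch:

```python
# Maximum transfer length F from Note 1: the largest symbol size
# (2**16 - 1 octets) times the per-source-block symbol limit (56403)
# times the maximum number of source blocks (2**8, from the 8-bit SBN).
max_symbol_size = 2**16 - 1
max_symbols_per_block = 56403
max_source_blocks = 2**8
f_max = max_symbol_size * max_symbols_per_block * max_source_blocks
print(f_max)  # 946270874880
```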
3.2.3 Scheme-Specific [0266] The following parameters are carried in the Scheme-Specific FEC Object Transmission Information element for this FEC Scheme: the number of source blocks (Z) (12 bits, unsigned integer); the number of sub-blocks (N) (12 bits, unsigned integer); a symbol alignment parameter (Al) (8 bits, unsigned integer). [0267] These parameters are all positive integers. The encoded Scheme-Specific FEC Object Transmission Information is a 4-octet field consisting of the parameters Z, N and A1 as illustrated in figure 3. [0268] The encoded FEC Object Transmission Information is a 12-octet field consisting of the concatenation of the encoded Common FEC Object Transmission Information and the encoded Scheme-Specific FEC Object Transmission Information. [0269] These three parameters define the source block partitioning as described in Section 4.4.1.2. 4. Procedures 4.1 Introduction [0270] For any undefined symbols or functions used in this section, in particular the functions "ceiling" and "floor", refer to Section 5.1. 4.2 Content Delivery Protocol Requirements [0271] This section describes the information exchange between the RaptorQ FEC Scheme and any Content Delivery Protocol (CDP) that makes use of the RaptorQ FEC Scheme for object delivery. [0272]
The RaptorQ encoder scheme and the RaptorQ decoder scheme for object delivery require the following information from the CDP: the transfer length of the object, F, in octets; a symbol alignment parameter, A1; the symbol size, T, in octets, which MUST be a multiple of A1; the number of source blocks, Z; the number of sub-blocks in each source block, N. The RaptorQ encoder scheme for object delivery additionally requires: - the object to be encoded, F octets. The RaptorQ encoder scheme supplies the CDP with the following information for each packet to be sent: - Source Block Number (SBN); - Encoding Symbol ID (ESI); - Encoding symbol(s). The CDP MUST communicate this information to the receiver. [0273] This section provides recommendations for the derivation of the three transport parameters T, Z and N. This recommendation is based on the following input parameters: - F, the transfer length of the object, in octets; - WS, the maximum size of a block that is decodable in working memory, in octets; - P', the maximum payload size in octets, which is assumed to be a multiple of A1; - A1, the symbol alignment parameter, in octets; - SS, a parameter where the desired lower bound on the sub-symbol size is SS*A1; - K'_max, the maximum number of source symbols per source block. Note: Section 5.1.2 defines K'_max to be equal to 56403.
[0274] Based on the above inputs, the transport parameters T, Z and N are calculated as follows. Let: - T = P'; - Kt = ceiling(F/T); - N_max = floor(T/(SS*A1)); - for all n = 1, ..., N_max: KL(n) is the maximum K' value in Table 2 in Section 5.6 such that K' <= WS/(A1*(ceiling(T/(A1*n)))); - Z = ceiling(Kt/KL(N_max)); - N is the minimum n = 1, ..., N_max such that ceiling(Kt/Z) <= KL(n). [0275] It is RECOMMENDED that each packet contain exactly one symbol. However, receivers MUST support the reception of packets that contain multiple symbols. [0276] The value Kt is the total number of symbols needed to represent the source data of the object. [0277] The algorithm above, and that defined in Section 4.4.1.2, ensure that the sub-symbol sizes are a multiple of the symbol alignment parameter, A1. This is useful because the sum operations used for encoding and decoding are generally performed several octets at a time, for example at least 4 octets at a time on a 32-bit processor. Thus, encoding and decoding can be performed faster if the sub-symbol sizes are a multiple of this number of octets. [0278] The recommended setting for the input parameter A1 is 4. [0279] The parameter WS can be used to generate encoded data that can be decoded efficiently with limited working memory at the decoder. Note that the actual maximum decoder memory requirement for a given WS value depends on the implementation, but it is possible to implement decoding using working memory only slightly larger than WS.
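As a non-normative illustration, the T, Z, N recommendation of this section can be sketched as follows. Table 2 of Section 5.6 is not reproduced here, so KL(n) falls back to the raw working-memory bound unless a list of supported K' values is passed in; real Table 2 values could give smaller KL(n) and hence different Z and N.

```python
import math

def derive_t_z_n(f, ws, p_prime, al, ss, supported_kprime=None):
    # Sketch of the Section 4.3 recommendation for T, Z and N.
    t = p_prime                      # symbol size = max payload size
    kt = math.ceil(f / t)            # total number of symbols
    n_max = t // (ss * al)           # sub-symbols of at least SS*Al octets

    def kl(n):
        # Largest supported K' fitting in working memory for n sub-blocks.
        # Table 2 is not reproduced here; None means "assume every
        # integer up to the bound is a supported K'" (an approximation).
        bound = ws // (al * math.ceil(t / (al * n)))
        if supported_kprime is None:
            return bound
        return max(k for k in supported_kprime if k <= bound)

    z = math.ceil(kt / kl(n_max))    # number of source blocks
    n = next(n for n in range(1, n_max + 1) if math.ceil(kt / z) <= kl(n))
    return t, kt, z, n
```

For example, F = 512,000, WS = 262,144, P' = 1,240, A1 = 4, SS = 5 gives T = 1,240, Kt = 413, Z = 1 and N = 2 under this approximation.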
4.4 Object Delivery 4.4.1 Source Block Construction 4.4.1.1 General [0280] In order to apply the RaptorQ encoder to a source object, the object may be divided into Z >= 1 blocks, known as source blocks. The RaptorQ encoder is applied independently to each source block. Each source block is identified by a unique Source Block Number (SBN), where the first source block has SBN zero, the second has SBN one, etc. Each source block is divided into a number, K, of source symbols of T octets each. Each source symbol is identified by a unique Encoding Symbol Identifier (ESI), where the first source symbol of a source block has ESI zero, the second has ESI one, etc. [0281] Each source block with K source symbols is divided into N >= 1 sub-blocks, which are small enough to be decoded in working memory. Each sub-block is divided into K sub-symbols of size T'. [0282] Note that the value of K is not necessarily the same for each source block of an object, and the value of T' may not necessarily be the same for each sub-block of a source block. However, the symbol size T is the same for all source blocks of an object, and the number of symbols, K, is the same for every sub-block of a source block. The exact partitioning of the object into source blocks and sub-blocks is described in Section 4.4.1.2 below. 4.4.1.2 Source Block and Sub-Block Partitioning [0283] The construction of source blocks and sub-blocks is determined based on five input parameters, F, A1, T, Z and N, and a function Division[].
The five input parameters are defined as follows: - F, the transfer length of the object, in octets; - A1, a symbol alignment parameter, in octets; - T, the symbol size, in octets, which MUST be a multiple of A1; - Z, the number of source blocks; - N, the number of sub-blocks in each source block. These parameters MUST be set so that ceiling(ceiling(F/T)/Z) <= K'_max. Recommendations for the derivation of these parameters are provided in Section 4.3. [0284] The function Division[] takes a pair of positive integers (I, J) as input and derives four non-negative integers (IL, IS, JL, JS) as output. Specifically, the value of Division[I, J] is the sequence (IL, IS, JL, JS), where IL = ceiling(I/J), IS = floor(I/J), JL = I - IS * J and JS = J - JL. Division[] derives the parameters for partitioning a block of size I into J blocks of approximately equal size: specifically, JL blocks of length IL and JS blocks of length IS. [0285] The source object MUST be partitioned into source blocks and sub-blocks as follows. Let: - Kt = ceiling(F/T); - (KL, KS, ZL, ZS) = Division[Kt, Z]; - (TL, TS, NL, NS) = Division[T/A1, N]. [0286] Then, the object MUST be partitioned into Z = ZL + ZS contiguous source blocks, the first ZL source blocks each having KL*T octets, that is, KL source symbols of T octets each, and the remaining ZS source blocks each having KS*T octets, that is, KS source symbols of T octets each. [0287] If Kt*T > F, then for encoding purposes the last symbol of the last source block MUST be padded at the end with Kt*T - F zero octets.
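The Division[] function of [0284] is small enough to transcribe directly; a non-normative Python sketch:

```python
import math

def division(i, j):
    # Division[I, J] of Section 4.4.1.2: parameters for splitting a
    # block of size I into J pieces of nearly equal size.
    il = math.ceil(i / j)   # size of the longer pieces
    i_s = i // j            # size of the shorter pieces
    jl = i - i_s * j        # number of longer pieces
    js = j - jl             # number of shorter pieces
    return il, i_s, jl, js
```

By construction IL*JL + IS*JS = I, so the pieces always account for the whole block. For instance, division(103, 2) returns (52, 51, 1, 1), which with A1 = 4 matches the sub-symbol sizes 208 and 204 of Example 3.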
[0288] Next, each source block with K source symbols MUST be divided into N = NL + NS contiguous sub-blocks, the first NL sub-blocks each consisting of K contiguous sub-symbols of size TL*A1 octets and the remaining NS sub-blocks each consisting of K contiguous sub-symbols of size TS*A1 octets. The symbol alignment parameter A1 ensures that sub-symbols are always a multiple of A1 octets. [0289] Finally, symbol m of a source block consists of the concatenation of the m-th sub-symbol from each of the N sub-blocks. Note that this implies that when N > 1, a symbol is NOT a contiguous portion of the object. 4.4.2 Encoding Packet Construction [0290] Each encoding packet contains the following information: - Source Block Number (SBN); - Encoding Symbol ID (ESI); - encoding symbol(s). [0291] Each source block is encoded independently of the others. Source blocks are numbered consecutively starting from zero. [0292] Encoding Symbol ID values from 0 to K-1 identify the source symbols of a source block in sequential order, where K is the number of source symbols in the source block. Encoding Symbol IDs from K onwards identify repair symbols generated from the source symbols using the RaptorQ encoder. [0293] Each encoding packet either consists entirely of source symbols (source packet) or entirely of repair symbols (repair packet). A packet may contain any number of symbols from the same source block. In the case that the last source symbol in a source packet includes padding octets added for the purposes of FEC encoding, these octets need not be included in the packet. Otherwise, only whole symbols MUST be included. [0294] The Encoding Symbol ID, X, carried in each source packet is the Encoding Symbol ID of the first source symbol carried in that packet.
Subsequent source symbols in the packet have Encoding Symbol IDs X+1 to X+G-1, in sequential order, where G is the number of symbols in the packet. [0295] Similarly, the Encoding Symbol ID, X, placed in a repair packet is the Encoding Symbol ID of the first repair symbol in the repair packet, and subsequent repair symbols in the packet have Encoding Symbol IDs X+1 to X+G-1, in sequential order, where G is the number of symbols in the packet. [0296] Note that it is not necessary for the receiver to know the total number of repair packets. 5. RaptorQ FEC Code Specification 5.1 Definitions, Symbols and Abbreviations [0297] For the purposes of the RaptorQ FEC code specification in this section, the following definitions, symbols and abbreviations apply. 5.1.1 Definitions [0298] - Source block: a block of K source symbols that are considered together for RaptorQ encoding and decoding purposes; - Extended source block: a block of K' source symbols, where K' >= K, constructed from a source block and zero or more padding symbols; - Symbol: a unit of data. The size, in octets, of a symbol is known as the symbol size. The symbol size is always a positive integer; - Source symbol: the smallest unit of data used during the encoding process. All source symbols within a source block have the same size; - Padding symbol: a symbol with all zero bits that is added to the source block to form the extended source block; - Encoding symbol: a symbol that can be sent as part of the encoding of a source block. The encoding symbols of a source block consist of the source symbols of the source block and the repair symbols generated from the source block. Repair symbols generated from a source block have the same size as the source symbols of that source block; - Repair symbol: the encoding symbols of a source block that are not source symbols.
The repair symbols are generated based on the source symbols of a source block; - Intermediate symbols: symbols generated from the source symbols using an inverse encoding process. The repair symbols are then generated directly from the intermediate symbols. The encoding symbols do not include the intermediate symbols, that is, the intermediate symbols are not sent as part of the encoding of a source block. The intermediate symbols are partitioned into LT symbols and PI symbols; - LT symbols: the subset of the intermediate symbols that can be LT neighbors of an encoding symbol; - PI symbols: the subset of the intermediate symbols that can be PI neighbors of an encoding symbol; - Systematic code: a code in which all source symbols are included as part of the encoding symbols of a source block. The RaptorQ code as described here is a systematic code; - Encoding Symbol ID (ESI): information that uniquely identifies each encoding symbol associated with a source block for sending and receiving purposes; - Internal Symbol ID (ISI): information that uniquely identifies each symbol associated with an extended source block for encoding and decoding purposes; - Arithmetic operations on octets, symbols and matrices: the operations used to produce encoding symbols from source symbols and vice versa. See Section 5.7. 5.1.2 Symbols [0299] i, j, u, v, h, d, a, b, d1, a1, v, m, x, y represent values or variables of one type or another, depending on the context. X denotes a non-negative integer value that is either an ISI value or an ESI value, depending on the context. ceiling(x) denotes the smallest integer greater than or equal to x, where x is a real value. floor(x) denotes the largest integer less than or equal to x, where x is a real value. min(x, y) denotes the minimum of the values x and y, and in general the minimum of all argument values.
max(x, y) denotes the maximum of the values x and y, and in general the maximum of all argument values. i % j denotes i modulo j. i + j denotes the sum of i and j. If i and j are octets, respectively symbols, this designates octet arithmetic, respectively symbol arithmetic, as defined in Section 5.7. If i and j are integers, this denotes normal integer addition. i * j denotes the product of i and j. If i and j are octets, this designates octet arithmetic, as defined in Section 5.7. If i is an octet and j is a symbol, this denotes the multiplication of a symbol by an octet, as also defined in Section 5.7. Finally, if i and j are integers, i * j denotes the normal integer product. a^^b denotes a raised to the power b. If a is an octet and b is a non-negative integer, this is understood to mean a * a * ... * a (b terms), with '*' being the octet product as defined in Section 5.7. u ^ v denotes, for bit strings u and v of equal length, the bitwise exclusive-or of u and v. Transpose[A] denotes the transpose of the matrix A. In this specification, all matrices have entries that are octets. A^^-1 denotes the inverse of the matrix A. In this specification, all matrices have octets as entries, so it is understood that operations on matrix entries are to be performed as outlined in Section 5.7, and A^^-1 is the inverse of the matrix A with respect to octet arithmetic. K denotes the number of symbols in a single source block. K' denotes the number of source plus padding symbols in an extended source block. For most of this specification, the padding symbols are considered to be additional source symbols. K'_max denotes the maximum number of source symbols that can be in a single source block. Set to 56403. L denotes the number of intermediate symbols for a single extended source block. S denotes the number of LDPC symbols for a single extended source block. These are LT symbols.
For each value of K' shown in Table 2 of Section 5.6, the corresponding value of S is a prime number. H denotes the number of HDPC symbols for a single extended source block. These are PI symbols. B denotes the number of intermediate symbols that are LT symbols, excluding the LDPC symbols. W denotes the number of intermediate symbols that are LT symbols. For each value of K' shown in Table 2 of Section 5.6, the corresponding value of W is a prime number. P denotes the number of intermediate symbols that are PI symbols. These include all of the HDPC symbols. P1 denotes the smallest prime number greater than or equal to P. U denotes the number of non-HDPC intermediate symbols that are PI symbols. C denotes an array of intermediate symbols C[0], C[1], C[2], ..., C[L-1]. C' denotes an array of symbols of the extended source block, where C'[0], C'[1], C'[2], ..., C'[K-1] are the source symbols of the source block and C'[K], C'[K+1], ..., C'[K'-1] are padding symbols. V0, V1, V2, V3 denote four arrays of 32-bit unsigned integers, V0[0], V0[1], ..., V0[255]; V1[0], V1[1], ..., V1[255]; V2[0], V2[1], ..., V2[255]; and V3[0], V3[1], ..., V3[255], as shown in Section 5.5. Rand[y, i, m] denotes a pseudo-random number generator. Deg[v] denotes a degree generator. Enc[K', C, (d, a, b, d1, a1, b1)] denotes an encoding symbol generator. Tuple[K', X] denotes a tuple generator function. T denotes the symbol size in octets. J(K') denotes the systematic index associated with K'. G denotes any generator matrix. I_S denotes the S x S identity matrix. 5.1.3 Abbreviations [0300] ESI: Encoding Symbol ID; HDPC: High Density Parity Check; ISI: Internal Symbol ID; LDPC: Low Density Parity Check; LT: Luby Transform; PI: Permanently Inactivated; SBN: Source Block Number; SBL: Source Block Length (in units of symbols). 5.2 Overview [0301] This section defines the systematic RaptorQ FEC code.
Symbols are the fundamental data units of the encoding and decoding process. For each source block, all symbols are the same size, referred to as the symbol size T. The atomic operations performed on symbols for both encoding and decoding are the arithmetic operations defined in Section 5.7. [0302] The basic encoder is described in Section 5.3. The encoder first derives a block of intermediate symbols from the source symbols of a source block. This intermediate block has the property that both source and repair symbols can be generated from it using the same process. The encoder produces repair symbols from the intermediate block using an efficient process, in which each repair symbol is the exclusive OR of a small number of intermediate symbols of the block. The source symbols can also be reproduced from the intermediate block using the same process. The encoding symbols are the combination of the source and repair symbols. [0303] An example decoder is described in Section 5.4. The process for producing source and repair symbols from the intermediate block is designed so that the intermediate block can be recovered from any sufficiently large set of encoding symbols, regardless of the mix of source and repair symbols in the set. Once the intermediate block is recovered, missing source symbols of the source block can be recovered using the encoding process. [0304] Requirements for a RaptorQ-compliant decoder are provided in Section 5.8. A number of decoding algorithms are possible to achieve these requirements. An efficient decoding algorithm that achieves these requirements is provided in Section 5.4. [0305] The construction of the intermediate and repair symbols is based in part on a pseudo-random number generator described in Section 5.3. This generator is based on a fixed set of 1024 random numbers that must be available to both sender and receiver. These numbers are provided in Section 5.5. The encoding and decoding operations for RaptorQ use operations on octets.
Section 5.7 describes how these operations are performed. [0306] Finally, the construction of the intermediate symbols from the source symbols is governed by "systematic index" values, which are provided in Section 5.6 for specific extended source block sizes between 6 and K'_max = 56403 source symbols. Thus, the RaptorQ code supports source blocks with between 1 and 56403 source symbols. 5.3 Systematic RaptorQ Encoder 5.3.1 Introduction [0307] For a given source block of K source symbols, for encoding and decoding purposes the source block is augmented with K'-K additional padding symbols, where K' is the smallest value that is at least K in the systematic index Table 2 of Section 5.6. The reason for padding a source block out to K' source symbols is to enable faster encoding and decoding, and to minimize the amount of table information that needs to be stored in the encoder and decoder. [0308] For the purposes of transmitting and receiving data, the value of K is used to determine the number of source symbols in a source block, and thus K must be known by the sender and the receiver. In this case, the sender and receiver can compute K' from K, and the K'-K padding symbols can be automatically added to the source block without any additional communication. The Encoding Symbol ID (ESI) is used by the sender and receiver to identify the encoding symbols of a source block, where the encoding symbols of a source block consist of the source symbols and the repair symbols associated with the source block. For a source block with K source symbols, the ESIs for the source symbols are 0, 1, 2, ..., K-1, and the ESIs for the repair symbols are K, K+1, K+2, .... Using the ESI to identify encoding symbols in transport ensures that the ESI values continue consecutively between the source and repair symbols.
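As a non-normative illustration of this padding step, the sketch below pads a source block out to the smallest supported K' and applies the ESI-to-ISI shift described in the paragraphs that follow. The list of supported K' values stands in for Table 2 of Section 5.6, which is not reproduced here; the values used in the example are placeholders, not actual table entries.

```python
def pad_source_block(source, t, supported_kprime):
    # Pad a source block of K symbols (each T octets) out to K'
    # symbols, where K' is the smallest supported value >= K.
    # supported_kprime stands in for Table 2 of Section 5.6; it must
    # be supplied by the caller (placeholder values in the example).
    k = len(source)
    k_prime = min(v for v in supported_kprime if v >= k)
    return source + [bytes(t)] * (k_prime - k), k_prime

def esi_to_isi(esi, k, k_prime):
    # Source symbols keep their ESI as ISI; repair symbols
    # (ESI >= K) are shifted past the K'-K padding symbols.
    return esi if esi < k else esi + (k_prime - k)
```

For example, with a placeholder table [10, 12, 18, 20] and a block of 11 four-octet symbols, the block is padded to K' = 12, and the first repair symbol (ESI 11) gets ISI 12.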
[0309] For the purposes of encoding and decoding data, the value K' derived from K is used as the number of source symbols of the extended source block on which the encoding and decoding operations are performed, where the K' source symbols consist of the K original source symbols and the K'-K additional padding symbols. The Internal Symbol ID (ISI) is used by the encoder and decoder to identify the symbols associated with the extended source block, that is, for the generation of encoding symbols and for decoding. For a source block with K original source symbols, the ISIs for the original source symbols are 0, 1, 2, ..., K-1, the ISIs for the K'-K padding symbols are K, K+1, K+2, ..., K'-1, and the ISIs for the repair symbols are K', K'+1, K'+2, .... Using the ISI for encoding and decoding allows the padding symbols of the extended source block to be treated in the same way as the other source symbols of the extended source block, and allows a given prefix of repair symbols to be generated in a consistent way for a given number K' of source symbols in the extended source block, independent of K. [0310] The relationship between ESIs and ISIs is simple: the ESIs and ISIs of the K original source symbols are the same, the K'-K padding symbols have an ISI but no corresponding ESI (since they are symbols that are neither sent nor received), and the ISI of a repair symbol is simply the ESI of the repair symbol plus K'-K. The translation between the ESIs used to identify encoding symbols as sent and received and the corresponding ISIs used for encoding and decoding, and the proper padding of the extended source block with padding symbols used for encoding and decoding, are the responsibility of the padding function in the RaptorQ encoder/decoder. 5.3.2 Encoding Overview [0311] The systematic RaptorQ encoder is used to generate any number of repair symbols from a source block consisting of K source symbols placed into an extended source block C'.
Figure 4 illustrates the encoding overview. [0312] The first step of encoding is to construct an extended source block by adding zero or more padding symbols such that the total number of symbols, K', is one of the values listed in Section 5.6. Each padding symbol consists of T octets where the value of each octet is zero. K' MUST be selected as the smallest value of K' from the table of Section 5.6 that is greater than or equal to K. [0313] Let C'[0], ..., C'[K-1] denote the K source symbols. Let C'[K], ..., C'[K'-1] denote the K'-K padding symbols, which are all set to zero bits. Then, C'[0], ..., C'[K'-1] are the symbols of the extended source block on which encoding and decoding are performed. [0314] In the remainder of this description, these padding symbols will be considered additional source symbols and referred to as such. However, these padding symbols are not part of the encoding symbols, that is, they are not sent as part of the encoding. At a receiver, the value of K' can be computed based on K, so the receiver can insert the K'-K padding symbols at the end of a source block of K' source symbols and recover the remaining K source symbols of the source block from the received encoding symbols. [0315] The second step of encoding is to generate a number, L > K', of intermediate symbols from the K' source symbols. In this step, K' source tuples (d[0], a[0], b[0], d1[0], a1[0], b1[0]), ..., (d[K'-1], a[K'-1], b[K'-1], d1[K'-1], a1[K'-1], b1[K'-1]) are generated using the Tuple[] generator as described in Section 5.3.5.4. The K' source tuples and the ISIs associated with the K' source symbols are used to determine the L intermediate symbols C[0], ..., C[L-1] from the source symbols using an inverse encoding process. This process can be realized by a RaptorQ decoding process. [0316] Certain "pre-coding relationships" must hold within the L intermediate symbols. Section 5.3.3.3 describes these relationships. Section 5.3.3.4
describes how the intermediate symbols are generated from the source symbols. [0317] Once the intermediate symbols have been generated, repair symbols can be produced. For a repair symbol with ISI X >= K', the tuple (d, a, b, d1, a1, b1) can be generated using the Tuple[] generator as described in Section 5.3.5.4. Then, the tuple (d, a, b, d1, a1, b1) and the ISI X are used to generate the corresponding repair symbol from the intermediate symbols using the Enc[] generator described in Section 5.3.5.3. The corresponding ESI for this repair symbol is then X - (K'-K). Note that the source symbols of the extended source block can also be generated using the same process, that is, for any X < K', the symbol generated using this process has the same value as C'[X]. 5.3.3. First Coding Step: Generation of Intermediate Symbols 5.3.3.1 General [0318] This coding step is a pre-coding step that generates the L intermediate symbols C[0], ..., C[L-1] from the source symbols C'[0], ..., C'[K'-1], where L > K' is defined in Section 5.3.3.3. The intermediate symbols are uniquely defined by two sets of constraints: 1. The intermediate symbols are related to the source symbols by a set of source symbol tuples and by the ISIs of the source symbols. The generation of the source symbol tuples is defined in Section 5.3.3.2 using the Tuple[] generator as described in Section 5.3.5.4. 2. A number of pre-coding relationships hold within the intermediate symbols themselves. These are defined in Section 5.3.3.3.
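As a minimal sketch of the repair-symbol generation of [0317]: assuming implementations of the Tuple[] and Enc[] generators of Sections 5.3.5.4 and 5.3.5.3 are available as callables, a repair symbol and its ESI can be produced as follows:

```python
def repair_symbol(X, K, K_prime, C, Tuple, Enc):
    """Generate the repair symbol with ISI X (X >= K') and its ESI.

    Tuple(K', X) yields the tuple (d, a, b, d1, a1, b1) of Section
    5.3.5.4; Enc combines the L intermediate symbols C accordingly,
    per Section 5.3.5.3. Both are passed in as assumed helpers.
    """
    t = Tuple(K_prime, X)        # (d, a, b, d1, a1, b1)
    symbol = Enc(K_prime, C, t)  # combination of intermediate symbols
    esi = X - (K_prime - K)      # translate the ISI back to an ESI
    return esi, symbol
```

The same call with X < K' reproduces the source symbol C'[X], which is what makes the code systematic.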
[0319] The generation of the L intermediate symbols is then defined in Section 5.3.3.4. 5.3.3.2 Source Symbol Tuples [0320] Each of the K' source symbols is associated with a source symbol tuple (d[X], a[X], b[X], d1[X], a1[X], b1[X]) for 0 <= X < K'. The source symbol tuples are determined using the Tuple generator defined in Section 5.3.5.4 as: For each X, 0 <= X < K', (d[X], a[X], b[X], d1[X], a1[X], b1[X]) = Tuple[K', X] 5.3.3.3. Pre-Coding Relationships [0321] The pre-coding relationships among the L intermediate symbols are defined by the requirement that a set of S+H linear combinations of the intermediate symbols evaluate to zero. There are S LDPC symbols and H HDPC symbols, and thus L = K' + S + H. The L intermediate symbols are also partitioned into two sets: a set of W LT symbols and a set of P PI symbols, so that L = W + P as well. The PI symbols are treated differently from the LT symbols in the encoding process. The P PI symbols consist of the H HDPC symbols together with a set of U = P - H of the other K' intermediate symbols. The W LT symbols consist of the S LDPC symbols together with W - S of the other K' intermediate symbols. The values of these parameters are determined from K' as described below, where H(K'), S(K'), and W(K') are derived from Table 2 in Section 5.6.
Let:
- S = S(K')
- H = H(K')
- W = W(K')
- L = K' + S + H
- P = L - W
- P1 denote the smallest prime number greater than or equal to P
- U = P - H
- B = W - S
- C[0], ..., C[B-1] denote the intermediate symbols that are LT symbols but not LDPC symbols.
- C[B], ..., C[B+S-1] denote the S LDPC symbols, which are also LT symbols.
- C[W], ..., C[W+U-1] denote the intermediate symbols that are PI symbols but not HDPC symbols.
- C[L-H], ..., C[L-1] denote the H HDPC symbols, which are also PI symbols.
[0322] The first set of pre-coding relations, called LDPC relations, is described below and requires that at the end of this process the symbols D[0], ..., D[S-1] all be zero:
- Initialize the symbols D[0] = C[B], ..., D[S-1] = C[B+S-1].
- For i = 0, ..., B-1 do
    a = 1 + floor(i/S)
    b = i % S
    D[b] = D[b] + C[i]
    b = (b + a) % S
    D[b] = D[b] + C[i]
    b = (b + a) % S
    D[b] = D[b] + C[i]
- For i = 0, ..., S-1 do
    a = i % P
    b = (i+1) % P
    D[i] = D[i] + C[W+a] + C[W+b]
[0323] Recall that the addition of symbols must be performed as specified in Section 5.7. Note that the LDPC relations as defined in the above algorithm are linear, so that there exist an S x B matrix G_LDPC,1 and an S x P matrix G_LDPC,2 such that G_LDPC,1 * Transpose[(C[0], ..., C[B-1])] + G_LDPC,2 * Transpose[(C[W], ..., C[W+P-1])] + Transpose[(C[B], ..., C[B+S-1])] = 0 (The matrix G_LDPC,1 is defined by the first loop in the above algorithm, and G_LDPC,2 can be deduced from the second loop.) [0324]
The second set of relations among the intermediate symbols C[0], ..., C[L-1] are the HDPC relations, and they are defined as follows: Let:
- alpha denote the octet represented by the integer 2, as defined in Section 5.7.
- MT denote an H x (K'+S) matrix of octets, where for j = 0, ..., K'+S-2 the entry MT[i,j] is the octet represented by the integer 1 if i = Rand[j+1, 6, H] or i = (Rand[j+1, 6, H] + Rand[j+1, 7, H-1] + 1) % H, and MT[i,j] is the zero element for all other values of i, and for j = K'+S-1, MT[i,j] = alpha^^i for i = 0, ..., H-1.
- GAMMA denote a (K'+S) x (K'+S) matrix of octets, where GAMMA[i,j] = alpha^^(i-j) for i >= j, and 0 otherwise.
[0325] Then, the relationship between the first K'+S intermediate symbols C[0], ..., C[K'+S-1] and the H HDPC symbols C[K'+S], ..., C[K'+S+H-1] is given by: Transpose[(C[K'+S], ..., C[K'+S+H-1])] + MT * GAMMA * Transpose[(C[0], ..., C[K'+S-1])] = 0 where '*' represents standard matrix multiplication using the octet multiplication to define the multiplication between a matrix of octets and a matrix of symbols (in particular, the column vector of symbols), and '+' denotes addition over vectors of octets. 5.3.3.4 Intermediate Symbols 5.3.3.4.1 Definition [0326] Given the K' source symbols C'[0], C'[1], ..., C'[K'-1], the L intermediate symbols C[0], C[1], ..., C[L-1] are the uniquely defined symbol values that satisfy the following conditions: 1.
The K' source symbols C'[0], C'[1], ..., C'[K'-1] satisfy the K' constraints C'[X] = Enc[K', (C[0], ..., C[L-1]), (d[X], a[X], b[X], d1[X], a1[X], b1[X])], for all X, 0 <= X < K', where (d[X], a[X], b[X], d1[X], a1[X], b1[X]) = Tuple[K', X], Tuple[] is defined in Section 5.3.5.4, and Enc[] is described in Section 5.3.5.3. 2. The L intermediate symbols C[0], C[1], ..., C[L-1] satisfy the pre-coding relationships defined in Section 5.3.3.3. 5.3.3.4.2 Illustrative Method for Calculating Intermediate Symbols [0327] This section describes a possible method for calculating the L intermediate symbols C[0], C[1], ..., C[L-1] satisfying the constraints in Section 5.3.3.4.1. [0328] The L intermediate symbols can be calculated as follows: Let:
- C denote the column vector of the L intermediate symbols C[0], C[1], ..., C[L-1].
- D denote the column vector consisting of S+H zero symbols followed by the K' source symbols C'[0], C'[1], ..., C'[K'-1].
[0329] Then, the above constraints define an L x L matrix A of octets such that: A * C = D Matrix A can be constructed as follows: Let:
- G_LDPC,1 and G_LDPC,2 be the S x B and S x P matrices as defined in Section 5.3.3.3.
- G_HDPC be the H x (K'+S) matrix such that G_HDPC * Transpose[(C[0], ..., C[K'+S-1])] = Transpose[(C[K'+S], ..., C[L-1])], that is, G_HDPC = MT * GAMMA.
- I_S be the S x S identity matrix.
- I_H be the H x H identity matrix.
- G_ENC be the K' x L matrix such that G_ENC * Transpose[(C[0], ..., C[L-1])] = Transpose[(C'[0], C'[1], ..., C'[K'-1])], that is, G_ENC[i,j] = 1 if and only if C[j] is included in the symbols that are added together to produce Enc[K', (C[0], ..., C[L-1]), (d[i], a[i], b[i], d1[i], a1[i], b1[i])], and G_ENC[i,j] = 0 otherwise.
Then: [0330] The first S rows of A are equal to G_LDPC,1 | I_S | G_LDPC,2. The next H rows of A are equal to G_HDPC | I_H. The remaining K' rows of A are equal to G_ENC. Matrix A is shown in Figure 5. [0331] The intermediate symbols can then be calculated as: C = (A^^-1) * D The source tuples are generated so that for any K' the matrix A has full rank and is, therefore, invertible. This calculation can be realized by applying a RaptorQ decoding process to the K' source symbols C'[0], C'[1], ..., C'[K'-1] to produce the L intermediate symbols C[0], C[1], ..., C[L-1]. [0332] In order to efficiently generate the intermediate symbols from the source symbols, it is recommended that an efficient decoder implementation such as that described in Section 5.4 be used. 5.3.4 Second Coding Step: Encoding [0333] In the second coding step, the repair symbol with ISI X (X >= K') is generated by applying the generator Enc[K', (C[0], C[1], ..., C[L-1]), (d, a, b, d1, a1, b1)] defined in Section 5.3.5.3 to the L intermediate symbols C[0], C[1], ..., C[L-1] using the tuple (d, a, b, d1, a1, b1) = Tuple[K', X]. 5.3.5 Generators 5.3.5.1. Random Number Generator [0334] The random number generator Rand[y, i, m] is defined as follows, where y is a non-negative integer, i is a non-negative integer less than 256, m is a positive integer, and the value produced is an integer between 0 and m-1. Let V0, V1, V2, and V3 be the arrays provided in Section 5.5. Let:
x0 = (y + i) mod 2^^8
x1 = (floor(y / 2^^8) + i) mod 2^^8
x2 = (floor(y / 2^^16) + i) mod 2^^8
x3 = (floor(y / 2^^24) + i) mod 2^^8
Then Rand[y, i, m] = (V0[x0] ^ V1[x1] ^ V2[x2] ^ V3[x3]) % m 5.3.5.2 Degree Generator [0335] The degree generator Deg[v] is defined as follows, where v is a non-negative integer that is less than 2^^20 = 1048576. Given v, find the index d in Table 1 such that f[d-1] <= v < f[d], and set Deg[v] = min(d, W-2).
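A minimal sketch of Rand[] and Deg[] as just defined; the 256-entry arrays V0..V3 (Section 5.5) and the degree table f (Table 1) are passed in as parameters, since their normative values are tabulated elsewhere in this specification:

```python
def rand(y: int, i: int, m: int, V0, V1, V2, V3) -> int:
    """Rand[y, i, m] of Section 5.3.5.1, with the V arrays passed in."""
    x0 = (y + i) % 256
    x1 = ((y >> 8) + i) % 256    # floor(y / 2^^8)
    x2 = ((y >> 16) + i) % 256   # floor(y / 2^^16)
    x3 = ((y >> 24) + i) % 256   # floor(y / 2^^24)
    return (V0[x0] ^ V1[x1] ^ V2[x2] ^ V3[x3]) % m

def deg(v: int, f, W: int) -> int:
    """Deg[v] of Section 5.3.5.2: find d with f[d-1] <= v < f[d].

    f must be the cumulative degree table (Table 1) covering v.
    """
    d = 1
    while not (f[d - 1] <= v < f[d]):
        d += 1
    return min(d, W - 2)
```

Note that the cap at W-2 keeps the LT degree strictly below the number of LT symbols.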
Recall that W is derived from K' as described in Section 5.3.3.3. 5.3.5.3 Encoding Symbol Generator [0336] The encoding symbol generator Enc[K', (C[0], C[1], ..., C[L-1]), (d, a, b, d1, a1, b1)] takes the following inputs: K' is the number of source symbols for the extended source block. Let L, W, B, S, P, and P1 be derived from K' as described in Section 5.3.3.3. (C[0], C[1], ..., C[L-1]) is the array of L intermediate symbols generated as described in Section 5.3.3.4. (d, a, b, d1, a1, b1) is a source tuple determined from ISI X using the Tuple[] generator defined in Section 5.3.5.4, where d is a positive integer denoting an LT encoding symbol degree; a is a positive integer between 1 and W-1 inclusive; b is a non-negative integer between 0 and W-1 inclusive; d1 is a positive integer that has value either 2 or 3, denoting a PI encoding symbol degree; a1 is a positive integer between 1 and P1-1 inclusive; and b1 is a non-negative integer between 0 and P1-1 inclusive. The encoding symbol generator produces a single encoding symbol as output (referred to as result), according to the following algorithm:
result = C[b]
For j = 1, ..., d-1 do
    b = (b + a) % W
    result = result + C[b]
While (b1 >= P) do b1 = (b1 + a1) % P1
result = result + C[W + b1]
For j = 1, ..., d1-1 do
    b1 = (b1 + a1) % P1
    While (b1 >= P) do b1 = (b1 + a1) % P1
    result = result + C[W + b1]
Return result
5.3.5.4 Tuple Generator [0337] The tuple generator Tuple[K', X] takes the following inputs: K' - the number of source symbols in the extended source block; X - an ISI. Let: L be determined from K' as described in Section 5.3.3.3; J = J(K') be the systematic index associated with K', as defined in Table 2 in Section 5.6.
The output of the tuple generator is the tuple (d, a, b, d1, a1, b1), determined as follows:
A = 53591 + J*997
if (A % 2 == 0) { A = A + 1 }
B = 10267*(J+1)
y = (B + X*A) % 2^^32
v = Rand[y, 0, 2^^20]
d = Deg[v]
a = 1 + Rand[y, 1, W-1]
b = Rand[y, 2, W]
If (d < 4) { d1 = 2 + Rand[X, 3, 2] } else { d1 = 2 }
a1 = 1 + Rand[X, 4, P1-1]
b1 = Rand[X, 5, P1]
5.4 Illustrative FEC Decoder 5.4.1 General [0338] This section describes an efficient decoding algorithm for the RaptorQ code introduced in this specification. Note that each received encoding symbol is a known linear combination of the intermediate symbols, so each received encoding symbol provides a linear equation among the intermediate symbols, which, together with the known linear pre-coding relationships among the intermediate symbols, gives a system of linear equations. Thus, any algorithm for solving systems of linear equations can successfully decode the intermediate symbols and, hence, the source symbols. However, the chosen algorithm has a major effect on the computational efficiency of the decoding. 5.4.2 Decoding an Extended Source Block 5.4.2.1 General [0339] The decoder is assumed to know the structure of the source block it is to decode, including the symbol size, T, the number K of symbols in the source block, and the number K' of source symbols in the extended source block. [0340] From the algorithms described in Section 5.3, the RaptorQ decoder can calculate the total number L = K' + S + H of intermediate symbols and determine how they were generated from the extended source block to be decoded. In this description, it is assumed that the received encoding symbols for the extended source block to be decoded are passed to the decoder. Additionally, for each such encoding symbol, it is assumed that the number and set of intermediate symbols whose sum is equal to the encoding symbol is passed to the decoder.
In the case of source symbols (including padding symbols), the source symbol tuples described in Section 5.3.3.2 indicate the number and set of intermediate symbols that are summed to give each source symbol. [0341] Let N >= K' be the number of received encoding symbols to be used for decoding, including the padding symbols for an extended source block, and let M = S + H + N. Then, with the notation of Section 5.3.3.4.2, A * C = D. [0342] Decoding an extended source block is equivalent to decoding C from known A and D. It is clear that C can be decoded if and only if the rank of A is L. Once C has been decoded, the missing source symbols can be obtained by using the source symbol tuples to determine the number and set of intermediate symbols that must be summed to obtain each missing source symbol. [0343] The first step in decoding C is to form a decoding schedule. In this step, A is converted, using Gaussian elimination (using row operations and row and column reorderings) and after discarding M - L rows, into the L by L identity matrix. The decoding schedule consists of the sequence of row operations and row and column reorderings during the Gaussian elimination process, and it depends only on A and not on D. The decoding of C from D can take place concurrently with the forming of the decoding schedule, or the decoding can take place afterwards based on the decoding schedule. [0344] The correspondence between the decoding schedule and the decoding of C is as follows. Let c[0] = 0, c[1] = 1, ..., c[L-1] = L-1 and d[0] = 0, d[1] = 1, ..., d[M-1] = M-1 initially. [0345] Each time a multiple, beta, of row i of A is added to row i' in the decoding schedule, then in the decoding process the symbol beta * D[d[i]] is added to the symbol D[d[i']]. [0346] Each time a row i of A is multiplied by an octet beta, then in the decoding process the symbol D[d[i]] is also multiplied by beta.
[0347] Each time row i is exchanged with row i' in the decoding schedule, then in the decoding process the value of d[i] is exchanged with the value of d[i']. [0348] Each time column j is exchanged with column j' in the decoding schedule, then in the decoding process the value of c[j] is exchanged with the value of c[j']. [0349] From this correspondence, it is clear that the total number of symbol operations in the decoding of the extended source block is the number of row operations (not exchanges) in the Gaussian elimination. Since A is the L by L identity matrix after the Gaussian elimination and after discarding the last M - L rows, it is clear at the end of successful decoding that the L symbols D[d[0]], D[d[1]], ..., D[d[L-1]] are the values of the L symbols C[c[0]], C[c[1]], ..., C[c[L-1]]. [0350] The order in which Gaussian elimination is performed to form the decoding schedule has no bearing on whether or not the decoding is successful. However, the speed of the decoding depends heavily on the order in which Gaussian elimination is performed. (Additionally, maintaining a sparse representation of A is crucial, although this is not described here.) The remainder of this section describes an order in which Gaussian elimination can be performed that is relatively efficient. 5.4.2.2 First Phase [0351] In the first phase of Gaussian elimination, matrix A is conceptually divided into submatrices and, in addition, a matrix X is created. This matrix has as many rows and columns as A, and it will be a lower triangular matrix throughout the first phase. At the beginning of this phase, matrix A is copied into matrix X. [0352] The submatrices of A are: 1. The submatrix I defined by the intersection of the first i rows and the first i columns. This is the identity matrix at the end of each step in the phase. 2. The submatrix defined by the intersection of the first i rows and all but the first i columns and the last u columns.
All entries of this submatrix are zero. 3. The submatrix defined by the intersection of the first i columns and all but the first i rows. All entries of this submatrix are zero. 4. The submatrix U defined by the intersection of all the rows and the last u columns. 5. The submatrix V formed by the intersection of all but the first i columns and the last u columns and all but the first i rows. Figure 6 illustrates the submatrices of A. At the beginning of the first phase, V = A. In each step, a row of A is chosen. [0353] The following graph defined by the structure of V is used in determining which row of A is chosen. The columns that intersect V are the nodes in the graph, and the rows that have exactly two nonzero entries in V and are not HDPC rows are the edges of the graph that connect the two columns (nodes) at the positions of the two ones. A component in this graph is a maximal set of nodes (columns) and edges (rows) such that there is a path between each pair of nodes/edges in the graph. The size of a component is the number of nodes (columns) in the component. [0354] There are at most L steps in the first phase. The phase ends successfully when i + u = L, that is, when V and the all-zero submatrix above V have disappeared, and A consists of I, the all-zero submatrix below I, and U. The phase ends unsuccessfully in decoding failure if, at some step before V disappears, there is no nonzero row in V to choose in that step. In each step, a row of A is chosen as follows: If all entries of V are zero, then no row is chosen and decoding fails. [0355] Let r be the minimum integer such that at least one row of A has exactly r ones in V. If r != 2, then choose a row with exactly r ones in V with minimum original degree among all such rows, except that HDPC rows should not be chosen until all non-HDPC rows have been processed.
If r = 2, then choose any row with exactly 2 ones in V that is part of a maximum-size component in the graph described above defined by V. [0356] After the row is chosen in this step, the first row of A that intersects V is exchanged with the chosen row so that the chosen row is the first row that intersects V. The columns of A among those that intersect V are reordered so that one of the r ones in the chosen row appears in the first column of V. The same row and column operations are also performed on matrix X. Then, an appropriate multiple of the chosen row is added to all other rows of A below the chosen row that have a nonzero entry in the first column of V. Specifically, if a row below the chosen row has an entry beta in the first column of V, and the chosen row has an entry alpha in the first column of V, then beta/alpha multiplied by the chosen row is added to that row to leave a zero value in the first column of V. Finally, i is incremented by 1 and u is incremented by r-1, which completes the step. [0357] Note that efficiency can be improved if the row operations identified above are not actually performed until the affected row is itself chosen during the decoding process. This avoids processing row operations for rows that are not eventually used in the decoding process and, in particular, postpones the row operations for which beta != 1 until they are really needed. In addition, the row operations required for the HDPC rows can be performed for all such rows in one process, using the algorithm described in Section 5.3.3.3. 5.4.2.3 Second Phase [0358] At this point, all entries of X outside the first i rows and i columns are discarded, so that X has lower triangular form. The last i rows and columns of X are discarded, so that X now has i rows and i columns. The submatrix U is further divided into the first i rows, U_upper, and the remaining M-i rows, U_lower.
Gaussian elimination is performed in the second phase on U_lower, either to determine that its rank is less than u (decoding failure) or to convert it into a matrix where the first u rows are the identity matrix (success of the second phase). Call this u by u identity matrix I_u. The M - L rows of A that intersect U_lower - I_u are discarded. After this phase, A has L rows and L columns. 5.4.2.4 Third Phase [0359] After the second phase, the only part of A that needs to be zeroed out in order to finish converting A into the L by L identity matrix is U_upper. The number of rows i of the submatrix U_upper is generally much larger than the number of columns u of U_upper. Furthermore, at this point, the matrix U_upper is typically dense, that is, the number of nonzero entries in this matrix is large. To reduce this matrix to a sparse form, the sequence of operations performed to obtain the matrix U_lower needs to be inverted. To this end, the matrix X is multiplied with the submatrix of A consisting of the first i rows of A. After this operation, the submatrix of A consisting of the intersection of the first i rows and columns is equal to X, whereas the matrix U_upper is transformed to a sparse form. 5.4.2.5 Fourth Phase [0360] For each of the first i rows of U_upper, do the following: if the row has a nonzero entry at position j, and if the value of that nonzero entry is b, then add to this row b times row j of I_u. After this step, the submatrix of A consisting of the intersection of the first i rows and columns is equal to X, the submatrix U_upper consists of zeros, the submatrix consisting of the intersection of the last u rows and the first i columns consists of zeros, and the submatrix consisting of the last u rows and columns is the matrix I_u. 5.4.2.6 Fifth Phase [0361]
For j from 1 to i, perform the following operations: 1. If A[j,j] is not equal to one, then divide row j of A by A[j,j]. 2. For l from 1 to j-1, if A[j,l] is nonzero, then add A[j,l] multiplied by row l of A to row j of A. [0362] After this phase, A is the L by L identity matrix and a complete decoding schedule has been successfully formed. Then, the corresponding decoding, consisting of summing known encoding symbols, can be performed to recover the intermediate symbols based on the decoding schedule. The tuples associated with all source symbols are computed according to Section 5.3.3.2. The tuples for received source symbols are used in the decoding. The tuples for missing source symbols are used to determine which intermediate symbols need to be summed to recover the missing source symbols. 5.5 Random Numbers [0363] The four arrays V0, V1, V2, and V3 used in Section 5.3.5.1 are provided below. There are 256 entries in each of the four arrays. The indexing into each array starts at 0, and the entries are 32-bit unsigned integers. 5.6 Systematic Indices and Other Parameters [0364] Table 2 below specifies the supported values of K'. The table also specifies, for each supported value of K', the systematic index J(K'), the number H(K') of HDPC symbols, the number S(K') of LDPC symbols, and the number W(K') of LT symbols. For each value of K', the corresponding values of S(K') and W(K') are prime numbers. [0365] The systematic index J(K') is designed to have the property that the set of source symbol tuples (d[0], a[0], b[0], d1[0], a1[0], b1[0]), ..., (d[K'-1], a[K'-1], b[K'-1], d1[K'-1], a1[K'-1], b1[K'-1]) are such that the L intermediate symbols are uniquely defined, that is, the matrix A in Figure 6 has full rank and is, therefore, invertible.
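Since the matrix A has full rank, the system A * C = D of Section 5.4.2 can in principle be solved by plain Gaussian elimination over GF(256). The following dense, unoptimized sketch ignores the phased, sparsity-aware schedule of Sections 5.4.2.2 through 5.4.2.6 and takes the octet operations of Section 5.7 as assumed helper parameters `mul` and `div`:

```python
def solve(A, D, mul, div):
    """Solve A*C = D over GF(256) by Gauss-Jordan elimination.

    A is an L x L list-of-lists of octets; D is a list of L symbols
    (single octets here, for simplicity); mul/div are the octet
    operations of Section 5.7. Addition of octets is XOR.
    """
    L = len(A)
    A = [row[:] for row in A]      # work on copies
    D = D[:]
    for j in range(L):
        # find a pivot row with a nonzero entry in column j
        p = next(i for i in range(j, L) if A[i][j] != 0)
        A[j], A[p] = A[p], A[j]
        D[j], D[p] = D[p], D[j]
        # scale the pivot row so that A[j][j] == 1
        piv = A[j][j]
        A[j] = [div(x, piv) for x in A[j]]
        D[j] = div(D[j], piv)
        # eliminate column j from all other rows
        for i in range(L):
            if i != j and A[i][j] != 0:
                beta = A[i][j]
                A[i] = [x ^ mul(beta, y) for x, y in zip(A[i], A[j])]
                D[i] ^= mul(beta, D[j])
    return D                        # now equal to C
```

A production decoder would instead record the row operations as a decoding schedule and keep A sparse, as the specification's phased algorithm does.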
5.7 Operating with Octets, Symbols, and Matrices 5.7.1 General [0366] The remainder of this section describes the arithmetic operations that are used to generate encoding symbols from source symbols and to generate source symbols from encoding symbols. Mathematically, octets can be considered as elements of a finite field, namely the finite field GF(256) with 256 elements, for which addition and multiplication operations are defined. Matrix operations and symbol operations are defined based on these arithmetic operations on octets. This allows a complete implementation of these arithmetic operations without having to understand the underlying mathematics of finite fields. 5.7.2. Arithmetic Operations on Octets [0367] Octets are mapped to non-negative integers in the range 0 to 255 in the usual way: a single octet of data B[7], B[6], B[5], B[4], B[3], B[2], B[1], B[0], where B[7] is the highest-order bit and B[0] is the lowest-order bit, is mapped to the integer i = B[7]*128 + B[6]*64 + B[5]*32 + B[4]*16 + B[3]*8 + B[2]*4 + B[1]*2 + B[0]. [0368] The addition of two octets u and v is defined as the XOR operation, that is, u + v = u ^ v. Subtraction is defined in the same way, so that u - v = u ^ v as well. The zero element (additive identity) is the octet represented by the integer 0. The additive inverse of u is simply u, that is, u + u = 0. [0369] The multiplication of two octets is defined with the help of two tables, OCT_EXP and OCT_LOG, which are provided in Section 5.7.3 and Section 5.7.4, respectively. The table OCT_LOG maps octets (other than the zero element) to non-negative integers, and OCT_EXP maps non-negative integers to octets. For two octets u and v, it is defined: u * v = 0 if u or v is equal to 0, and OCT_EXP[OCT_LOG[u] + OCT_LOG[v]] otherwise. Note that the '+' on the right-hand side of the above expression is the usual integer addition, since its arguments are ordinary integers.
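The table-based multiplication above (and the division defined next) can be sketched as follows. One assumption is made for this sketch: the tables are rebuilt from alpha = 2 using the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D), which is consistent with the tables listed in Sections 5.7.3 and 5.7.4; those listed tables remain authoritative.

```python
# Build OCT_EXP (510 entries) and OCT_LOG (indexed by octets 1..255)
# from alpha = 2, assuming the primitive polynomial 0x11D.
OCT_EXP, OCT_LOG = [], [0] * 256
x = 1
for i in range(255):
    OCT_EXP.append(x)
    OCT_LOG[x] = i
    x <<= 1
    if x & 0x100:                # reduce modulo x^8 + x^4 + x^3 + x^2 + 1
        x ^= 0x11D
OCT_EXP += OCT_EXP[:255]         # second copy avoids a "% 255" in multiplication

def octet_mul(u: int, v: int) -> int:
    """u * v in GF(256), per Section 5.7.2."""
    if u == 0 or v == 0:
        return 0
    return OCT_EXP[OCT_LOG[u] + OCT_LOG[v]]

def octet_div(u: int, v: int) -> int:
    """u / v in GF(256), with v != 0, per Section 5.7.2."""
    if u == 0:
        return 0
    return OCT_EXP[OCT_LOG[u] - OCT_LOG[v] + 255]
```

Doubling the OCT_EXP table to 510 entries is exactly why the specification's table has that length: the sum of two logarithms never exceeds 508, so no explicit modular reduction of the exponent is needed.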
The division u / v of two octets u and v, where v != 0, is defined as follows: u / v = 0 if u == 0, and OCT_EXP[OCT_LOG[u] - OCT_LOG[v] + 255] otherwise. [0370] The one element (multiplicative identity) is the octet represented by the integer 1. For an octet u that is not the zero element, the multiplicative inverse of u is OCT_EXP[255 - OCT_LOG[u]]. The octet denoted by alpha is the octet with the integer representation 2. If i is a non-negative integer with 0 <= i < 256, then: alpha^^i = OCT_EXP[i] 5.7.3. The Table OCT_EXP [0371] The table OCT_EXP contains 510 octets. The indexing starts at 0 and ranges up to 509, and the entries are the octets with the following positive integer representations: 5.7.4. The Table OCT_LOG [0372] The table OCT_LOG contains 255 non-negative integers. The table is indexed by octets interpreted as integers. The octet corresponding to the zero element, which is represented by the integer 0, is excluded as an index, so the indexing starts at 1 and ranges up to 255, and the entries are as follows: 5.7.5. Operations on Symbols [0373] Operations on symbols have the same semantics as operations on vectors of octets of length T in this specification. Thus, if U and V are two symbols formed by the octets u[0], ..., u[T-1] and v[0], ..., v[T-1], respectively, the sum of the symbols U + V is defined as the component-wise sum of the octets, that is, it is equal to the symbol D formed by the octets d[0], ..., d[T-1] such that d[i] = u[i] + v[i], 0 <= i < T. Additionally, if beta is an octet, the product beta * U is defined as the symbol D obtained by multiplying each octet of U by beta, that is, d[i] = beta * u[i], 0 <= i < T. 5.7.6. Matrix Operations [0374] All matrices in this specification have entries that are octets, and hence matrix operations and definitions are defined in terms of the underlying octet arithmetic, for example, multiplication of a matrix, matrix rank, and matrix inversion. 5.8.
Requirements for a Compliant Decoder [0375] If a RaptorQ-compliant decoder receives a mathematically sufficient set of encoding symbols generated according to the encoder specification in Section 5.3 to reconstruct a source block, then that decoder MUST recover the entire source block. [0376] A RaptorQ-compliant decoder MUST have the following recovery properties for source blocks with K' source symbols for all values of K' in Table 2 of Section 5.6: 1. If the decoder receives K' encoding symbols generated according to the encoder specification in Section 5.3 with the corresponding ESIs chosen independently and uniformly at random from the range of possible ESIs, then on average the decoder will fail to recover the entire source block at most 1 out of 100 times. 2. If the decoder receives K'+1 encoding symbols generated according to the encoder specification in Section 5.3
with the corresponding ESIs chosen independently and uniformly at random from the range of possible ESIs, then on average the decoder will fail to recover the entire source block at most 1 out of 10,000 times.

3. If the decoder receives K' + 2 encoding symbols generated according to the encoder specification in Section 5.3, with the corresponding ESIs chosen independently and uniformly at random from the range of possible ESIs, then on average the decoder will fail to recover the entire source block at most 1 out of 1,000,000 times.

[0377] Note that the Illustrative FEC Decoder specified in Section 5.4 meets both requirements, that is, it:

1. can reconstruct a source block as long as it receives a mathematically sufficient set of encoding symbols generated in accordance with the encoder specification in Section 5.3; and

2. fulfills the mandatory recovery properties above.

6. Security Considerations

[0378] The distribution of data can be subject to denial-of-service attacks by attackers who send corrupted packets that are accepted as legitimate by receivers. This is particularly a concern for multicast distribution, since a corrupted packet can be injected into the session near the root of the multicast tree, in which case the corrupted packet will reach many receivers. It is of particular concern when the code described in this document is used, since the use of even a single corrupted packet containing encoding data can result in the decoding of an object that is completely corrupted and useless. It is therefore RECOMMENDED that source authentication and integrity checking be applied to decoded objects before the objects are distributed to an application.
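A minimal sketch of such an integrity check, assuming the common choice of appending a SHA-1 digest to the object before transmission and verifying it after FEC decoding (the function names here are illustrative, not part of the specification):

```python
import hashlib

def attach_digest(obj: bytes) -> bytes:
    # Append the 20-byte SHA-1 digest of the object before transmission.
    return obj + hashlib.sha1(obj).digest()

def verify_and_strip(blob: bytes) -> bytes:
    # After FEC decoding, recompute and compare the digest before
    # delivering the object to the application.
    obj, digest = blob[:-20], blob[-20:]
    if hashlib.sha1(obj).digest() != digest:
        raise ValueError("integrity check failed; discard object")
    return obj
```

Note that this hash check alone only detects corruption; source authentication additionally requires a digital signature or a packet authentication protocol, as discussed below.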
For example, a SHA-1 hash [SHA1] of an object can be attached before transmission, and the SHA-1 hash is computed and verified after the object is decoded, but before it is distributed to an application. Source authentication SHOULD be provided, for example, by including a digital signature verifiable by the receiver, computed on top of the hash value. It is also RECOMMENDED that a packet authentication protocol such as TESLA [RFC4082] be used to detect and discard corrupted packets upon arrival. This method can also be used to provide source authentication. Additionally, it is RECOMMENDED that Reverse Path Forwarding checks be enabled on all network routers and switches along the route from the sender to the receivers, to limit the possibility that a bad agent successfully injects a corrupted packet into the multicast tree data path.

[0379] Another security concern is that some FEC information may be obtained by receivers out of band in a session description, and if the session description is forged or corrupted, then the receivers will not use the correct protocol for decoding the content of the received packets. To avoid these problems, it is RECOMMENDED that measures be taken to prevent receivers from accepting incorrect session descriptions, for example, by using source authentication to ensure that receivers accept only legitimate session descriptions from authorized senders.

7. IANA Considerations

[0380] Values of FEC Encoding IDs and FEC Instance IDs are subject to IANA registration. For general guidelines on IANA considerations as they apply to this document, see [RFC5052]. This document assigns the Fully-Specified FEC Encoding ID 6 (tbc) under the ietf:rmt:fec:encoding name-space to "RaptorQ Code".

8. Acknowledgments

[0381] Thanks to Ranganathan (Ranga) Krishnan. Ranga Krishnan has been very supportive in finding and resolving implementation details and in finding the systematic indices.
In addition, Habeeb Mohiuddin Mohammed and Antonios Pitarokoilis, both from the Munich University of Technology (TUM), and Alan Shinsato carried out two independent implementations of the RaptorQ encoder/decoder that helped to clarify and resolve issues with this specification.

9. References

9.1 Normative References

[0382] [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC4082] Perrig, A., Song, D., Canetti, R., Tygar, J., and B. Briscoe, "Timed Efficient Stream Loss-Tolerant Authentication (TESLA): Multicast Source Authentication Transform Introduction", RFC 4082, June 2005.

[SHA1] "Secure Hash Standard", Federal Information Processing Standards Publication (FIPS PUB) 180-1, April 2005.

[RFC5052] Watson, M., Luby, M., and L. Vicisano, "Forward Error Correction (FEC) Building Block", RFC 5052, August 2007.

9.2 Informative References

[0383] [RFC3453] Luby, M., Vicisano, L., Gemmell, J., Rizzo, L., Handley, M., and J. Crowcroft, "The Use of Forward Error Correction (FEC) in Reliable Multicast", RFC 3453, December 2002.

[RFC5053] Luby, M., Shokrollahi, A., Watson, M., and T. Stockhammer, "Raptor Forward Error Correction Scheme for Object Delivery", RFC 5053, October 2007.

Inventors' addresses

Michael Luby, Qualcomm Incorporated, 3165 Kifer Road, Santa Clara, CA 95051, USA; email: luby@qualcomm.com

Amin Shokrollahi, EPFL, Laboratoire d'algorithmique, EPFL Station 14, Batiment BC, Lausanne 1015, Switzerland; email: amin.shokrollahi@epfl.ch

Mark Watson, Qualcomm Incorporated, 3165 Kifer Road, Santa Clara, CA 95051, USA; email: watson@qualcomm.com

Thomas Stockhammer, Nomor Research, Brecherspitzstrasse 8, Munich 81541, Germany; email: stockhammer@nomor.de

Lorenz Minder, Qualcomm Incorporated, 3165 Kifer Road, Santa Clara, CA 95051, USA; email: lminder@qualcomm.com
Claims (25)

[0001] Method for transmitting data electronically through one or more transmitters capable of emitting an electronic signal, wherein the data to be transmitted is represented by an ordered set of source symbols and the data is transmitted as a sequence of encoded symbols representing at least a part of the electronic signal, the method characterized by the fact that it comprises:

- obtaining, in an electronically readable form, the ordered set of source symbols;

- generating a set of intermediate symbols from the ordered set of source symbols, wherein the source symbols can be regenerated from the set of intermediate symbols;

- designating sets of intermediate symbols, prior to transmission, such that each intermediate symbol is designated as a member of one of the sets of intermediate symbols, there being at least a first set of intermediate symbols and a second set of intermediate symbols, and wherein each set of intermediate symbols has at least one intermediate symbol as a member, wherein the first set of intermediate symbols is designated as the non-permanently inactivated symbols, LT, for belief propagation decoding and the second set of intermediate symbols is designated as the symbols to be permanently inactivated for belief propagation decoding, wherein the permanently inactivated symbols, PI, are symbols to be solved separately from belief propagation decoding; and

- generating a plurality of encoded symbols, wherein an encoded symbol is generated from one or more of the intermediate symbols, wherein at least one encoded symbol is generated, directly or indirectly, from a plurality of intermediate symbols selected from a plurality of sets of intermediate symbols;

wherein generating the plurality of encoded symbols comprises dynamically encoding the PI intermediate symbols with a PI encoder, dynamically encoding the LT intermediate symbols with an LT encoder, and combining the PI and LT intermediate symbols in a combiner to generate the plurality of
encoded symbols.

[0002] Method according to claim 1, characterized by the fact that each encoded symbol is generated from an exclusive OR ("XOR") of a first symbol generated from one or more of the first set of intermediate symbols and a second symbol generated from one or more of the second set of intermediate symbols in the combiner, and wherein each set of intermediate symbols has associated with it different encoding parameters comprising at least different degree distributions, such that each encoded symbol is generated from a combination of a first symbol generated from one or more of the first set of intermediate symbols with a first degree distribution and a second symbol generated from one or more of the second set of intermediate symbols with a second degree distribution different from the first degree distribution.

[0003] Method according to claim 1, characterized by the fact that each encoded symbol is generated from a combination of a first symbol generated from one or more of the first set of intermediate symbols and a second symbol generated from one or more of the second set of intermediate symbols, wherein the first symbol is generated using a chain reaction coding process applied to the first set of intermediate symbols, wherein the second symbol is an exclusive OR (XOR) of a first number of symbols chosen at random from the second set of intermediate symbols, wherein the first number depends on a second number equal to the number of symbols chosen from the first set to generate the first symbol, and the combination is the XOR of the first symbol and the second symbol.

[0004] Method according to claim 1, characterized in that the intermediate symbols comprise the ordered set of source symbols and a set of redundant symbols generated from the ordered set of source symbols.
[0005] Method according to claim 4, characterized by the fact that at least some of the redundant symbols are generated using GF[2] operations and other redundant symbols are generated using GF[256] operations.

[0006] Method according to claim 1, characterized by the fact that the intermediate symbols are generated, during encoding, from the source symbols using a decoding process, wherein the decoding process is based on a linear set of relations between the intermediate symbols and the source symbols.

[0007] Method according to claim 6, characterized by the fact that at least some of the linear relations are relations over GF[2] and other linear relations are relations over GF[256].

[0008] Method according to claim 1, characterized by the fact that the number of distinct encoded symbols that can be generated from a given ordered set of source symbols is independent of the number of source symbols in that ordered set.

[0009] Method according to claim 1, characterized by the fact that an average number of symbol operations performed to generate an encoded symbol is bounded by a constant independent of the number of source symbols in that ordered set.
[0010] Method for receiving data from a source, wherein the data is received at a destination via a packet communication channel, and wherein a set of encoded symbols derived from an ordered set of source symbols represents the data sent from the source to the destination, the method characterized by the fact that it comprises:

- obtaining the set of received encoded symbols;

- decoding a set of intermediate symbols from the set of received encoded symbols;

- associating each of the intermediate symbols with a set of intermediate symbols, wherein the intermediate symbols are associated with at least two sets, and wherein a first set comprises symbols that have been designated by the source as non-permanently inactivated symbols, LT, for belief propagation decoding, and a second set comprises symbols that have been designated by the source as permanently inactivated symbols, PI, to be solved separately from belief propagation decoding, for the purposes of scheduling a decoding process to recover the intermediate symbols from the received encoded symbols; and

- recovering at least some of the source symbols of the ordered set of source symbols from the set of intermediate symbols according to the decoding process.
[0011] Method according to claim 10, characterized by the fact that the decoding process comprises at least a first decoding phase, in which a reduced set of encoded symbols is generated that depends on the second set comprising permanently inactivated symbols and on a third set of symbols comprising dynamically inactivated symbols, the third set of symbols being a subset of the first set of symbols; a second decoding phase, in which the reduced set of encoded symbols is used to decode the second set of symbols and the third set of symbols; and a third decoding phase, in which the decoded second set of symbols and third set of symbols and the set of received encoded symbols are used to decode at least some of the remaining intermediate symbols that are in the first set of symbols.

[0012] Method according to claim 11, characterized by the fact that the first decoding phase uses belief propagation decoding combined with inactivation decoding, and/or the second decoding phase uses Gaussian elimination.

[0013] Method according to claim 11, characterized by the fact that the third decoding phase uses back substitution, or a backward sweep followed by a forward sweep.

[0014] Method according to claim 11, characterized by the fact that the decoding process operates on the third set of symbols such that the number of symbols in the third set of symbols is less than 10% of the number of source symbols and/or less than 10% of the number of symbols in the second set of symbols.

[0015] Method according to claim 10, characterized in that the received encoded symbols are operated on as symbols generated by an LDPC code or symbols generated by a Reed-Solomon code.
[0016] Method according to claim 10, characterized in that each received encoded symbol of the set of received encoded symbols is operated on as a combination of a first symbol generated from one or more symbols of the first set of symbols and a second symbol generated from one or more symbols of the second set of symbols, wherein each received encoded symbol is operated on as if the combination were an XOR of the first symbol and an XOR of a fixed number of symbols chosen at random from the second set of symbols, and wherein each received encoded symbol is operated on as if the second symbol were an XOR of a first number of symbols chosen at random from the second set of symbols, wherein the first number of symbols depends on a second number of symbols chosen from the first set of symbols to generate the first symbol.

[0017] Method according to claim 16, characterized by the fact that the decoding process operates as if the first symbol had been chosen based on a chain reaction code over the first set of symbols.

[0018] Method according to claim 10, characterized by the fact that the decoding process operates as if the size of the second set of symbols were proportional to the square root of the number of source symbols.

[0019] Method according to claim 10, characterized by the fact that the decoding process operates as if the intermediate symbols comprised the ordered set of source symbols and a set of redundant symbols generated from the ordered set of source symbols, wherein the decoding operates as if at least some of the redundant symbols had been generated using GF[2] operations and other redundant symbols had been generated using GF[256] operations.

[0020] Method according to claim 10, characterized by the fact that the decoding process operates as if the intermediate symbols comprised the ordered set of source symbols.
[0021] Method according to claim 10, characterized by the fact that the decoding process operates as if the intermediate symbols were symbols generated from the source symbols using a decoding process based on a linear set of relations between the intermediate symbols and the source symbols, wherein the decoding process operates as if at least some of the linear relations were relations over GF[2] and other linear relations were relations over GF[256].

[0022] Method according to claim 10, characterized in that the decoding process operates as if the number of possible distinct encoded symbols that can be received were independent of the number of source symbols in the ordered set.

[0023] Method according to claim 10, characterized by the fact that an average number of symbol operations performed to decode the set of source symbols from the set of received encoded symbols is bounded by a constant times the number of source symbols, where the constant is independent of the number of source symbols.

[0024] Method according to claim 10, characterized by the fact that the decoding process operates as if the number of symbols in the first set of symbols were at least an order of magnitude greater than the number of symbols in the second set of symbols.

[0025] Method according to claim 10, characterized by the fact that the decoding process operates such that the recovery of an entire set of K source symbols from a set of N = K + A encoded symbols, for some K, N and A, has a probability of success of at least a lower bound of 1 - (0.01)^(A + 1) for A = 0, 1 or 2, with the lower bound being independent of the number of source symbols.
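The encoder structure recited in claim 1 — an LT encoding of the first set XORed with a PI encoding of the permanently inactivated set, combined in a combiner — can be sketched as follows. This is a hypothetical illustration: the degree choices below are placeholders, not the actual RaptorQ degree distributions, and symbols are modeled as equal-length byte strings.

```python
import random

def xor_symbols(symbols):
    # XOR a list of equal-length byte strings component-wise.
    out = bytearray(len(symbols[0]))
    for s in symbols:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

def encode_symbol(lt_symbols, pi_symbols, rng):
    # "LT part": XOR of a randomly chosen number of LT intermediate symbols
    # (placeholder for the chain reaction / LT encoder of claim 1).
    lt_degree = rng.randint(1, len(lt_symbols))
    lt_part = xor_symbols(rng.sample(lt_symbols, lt_degree))
    # "PI part": its degree depends on the LT degree (cf. claim 3);
    # the specific rule here is invented for illustration.
    pi_degree = 2 if lt_degree > 1 else 3
    pi_part = xor_symbols(rng.sample(pi_symbols, pi_degree))
    # Combiner: the encoded symbol is the XOR of the two parts.
    return xor_symbols([lt_part, pi_part])
```

Because XOR is its own inverse, XORing an encoded symbol with one of its two parts recovers the other part, which is the property the phased decoding of claim 11 exploits.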
packet basis, the error correction contributions of a source packet to a plurality of wildcard packets are computed, and the source packet is transmitted thereafter| US6535920B1|1999-04-06|2003-03-18|Microsoft Corporation|Analyzing, indexing and seeking of streaming information| US6804202B1|1999-04-08|2004-10-12|Lg Information And Communications, Ltd.|Radio protocol for mobile communication system and method| US7885340B2|1999-04-27|2011-02-08|Realnetworks, Inc.|System and method for generating multiple synchronized encoded representations of media data| FI113124B|1999-04-29|2004-02-27|Nokia Corp|Communication| MY130203A|1999-05-06|2007-06-29|Sony Corp|Methods and apparatus for data processing, methods and apparatus for data reproducing and recording media| KR100416996B1|1999-05-10|2004-02-05|삼성전자주식회사|Variable-length data transmitting and receiving apparatus in accordance with radio link protocol for a mobile telecommunication system and method thereof| US6154452A|1999-05-26|2000-11-28|Xm Satellite Radio Inc.|Method and apparatus for continuous cross-channel interleaving| US6229824B1|1999-05-26|2001-05-08|Xm Satellite Radio Inc.|Method and apparatus for concatenated convolutional endcoding and interleaving| AU5140200A|1999-05-26|2000-12-18|Enounce, Incorporated|Method and apparatus for controlling time-scale modification during multi-media broadcasts| JP2000353969A|1999-06-11|2000-12-19|Sony Corp|Receiver for digital voice broadcasting| US6577599B1|1999-06-30|2003-06-10|Sun Microsystems, Inc.|Small-scale reliable multicasting| US20050160272A1|1999-10-28|2005-07-21|Timecertain, Llc|System and method for providing trusted time in content of digital data files| IL141800D0|1999-07-06|2002-03-10|Samsung Electronics Co Ltd|Rate matching device and method for a data communication system| US6643332B1|1999-07-09|2003-11-04|Lsi Logic Corporation|Method and apparatus for multi-level coding of digital signals| US6279072B1|1999-07-22|2001-08-21|Micron Technology, 
Inc.|Reconfigurable memory with selectable error correction storage| JP3451221B2|1999-07-22|2003-09-29|日本無線株式会社|Error correction coding apparatus, method and medium, and error correction code decoding apparatus, method and medium| US6453440B1|1999-08-04|2002-09-17|Sun Microsystems, Inc.|System and method for detecting double-bit errors and for correcting errors due to component failures| JP2001060934A|1999-08-20|2001-03-06|Matsushita Electric Ind Co Ltd|Ofdm communication equipment| US6430233B1|1999-08-30|2002-08-06|Hughes Electronics Corporation|Single-LNB satellite data receiver| US6332163B1|1999-09-01|2001-12-18|Accenture, Llp|Method for providing communication services over a computer network system| JP4284774B2|1999-09-07|2009-06-24|ソニー株式会社|Transmission device, reception device, communication system, transmission method, and communication method| JP2001094625A|1999-09-27|2001-04-06|Canon Inc|Data communication unit, data communication method and storage medium| WO2001024474A1|1999-09-27|2001-04-05|Koninklijke Philips Electronics N.V.|Partitioning of file for emulating streaming| US7529806B1|1999-11-04|2009-05-05|Koninklijke Philips Electronics N.V.|Partitioning of MP3 content file for emulating streaming| US6523147B1|1999-11-11|2003-02-18|Ibiquity Digital Corporation|Method and apparatus for forward error correction coding for an AM in-band on-channel digital audio broadcasting system| US6785323B1|1999-11-22|2004-08-31|Ipr Licensing, Inc.|Variable rate coding for forward link| US6678855B1|1999-12-02|2004-01-13|Microsoft Corporation|Selecting K in a data transmission carousel using forward error correction| US6748441B1|1999-12-02|2004-06-08|Microsoft Corporation|Data carousel receiving and caching| US6798791B1|1999-12-16|2004-09-28|Agere Systems Inc|Cluster frame synchronization scheme for a satellite digital audio radio system| US6487692B1|1999-12-21|2002-11-26|Lsi Logic Corporation|Reed-Solomon decoder| US20020009137A1|2000-02-01|2002-01-24|Nelson John 
E.|Three-dimensional video broadcasting system| US6965636B1|2000-02-01|2005-11-15|2Wire, Inc.|System and method for block error correction in packet-based digital communications| WO2001057667A1|2000-02-03|2001-08-09|Bandwiz, Inc.|Data streaming| IL140504D0|2000-02-03|2002-02-10|Bandwiz Inc|Broadcast system| US7304990B2|2000-02-03|2007-12-04|Bandwiz Inc.|Method of encoding and transmitting data over a communication medium through division and segmentation| JP2001251287A|2000-02-24|2001-09-14|Geneticware Corp Ltd|Confidential transmitting method using hardware protection inside secret key and variable pass code| DE10009443A1|2000-02-29|2001-08-30|Philips Corp Intellectual Pty|Receiver and method for detecting and decoding a DQPSK-modulated and channel-coded received signal| US6765866B1|2000-02-29|2004-07-20|Mosaid Technologies, Inc.|Link aggregation| US6384750B1|2000-03-23|2002-05-07|Mosaid Technologies, Inc.|Multi-stage lookup for translating between signals of different bit lengths| JP2001274776A|2000-03-24|2001-10-05|Toshiba Corp|Information data transmission system and its transmitter and receiver| US6510177B1|2000-03-24|2003-01-21|Microsoft Corporation|System and method for layered video coding enhancement| US6851086B2|2000-03-31|2005-02-01|Ted Szymanski|Transmitter, receiver, and coding scheme to increase data rate and decrease bit error rate of an optical data link| US6473010B1|2000-04-04|2002-10-29|Marvell International, Ltd.|Method and apparatus for determining error correction code failure rate for iterative decoding algorithms| US8572646B2|2000-04-07|2013-10-29|Visible World Inc.|System and method for simultaneous broadcast for personalized messages| EP1273152B1|2000-04-08|2006-08-02|Sun Microsystems, Inc.|Method of streaming a single media track to multiple clients| US6631172B1|2000-05-01|2003-10-07|Lucent Technologies Inc.|Efficient list decoding of Reed-Solomon codes for message recovery in the presence of high noise levels| 
US6742154B1|2000-05-25|2004-05-25|Ciena Corporation|Forward error correction codes for digital optical network optimization| US6694476B1|2000-06-02|2004-02-17|Vitesse Semiconductor Corporation|Reed-solomon encoder and decoder| US6738942B1|2000-06-02|2004-05-18|Vitesse Semiconductor Corporation|Product code based forward error correction system| GB2366159B|2000-08-10|2003-10-08|Mitel Corp|Combination reed-solomon and turbo coding| US6834342B2|2000-08-16|2004-12-21|Eecad, Inc.|Method and system for secure communication over unstable public connections| KR100447162B1|2000-08-19|2004-09-04|엘지전자 주식회사|Method for length indicator inserting in protocol data unit of radio link control| JP2002073625A|2000-08-24|2002-03-12|Nippon Hoso Kyokai <Nhk>|Method server and medium for providing information synchronously with broadcast program| US7340664B2|2000-09-20|2008-03-04|Lsi Logic Corporation|Single engine turbo decoder with single frame size buffer for interleaving/deinterleaving| US6486803B1|2000-09-22|2002-11-26|Digital Fountain, Inc.|On demand encoding with a window| US7151754B1|2000-09-22|2006-12-19|Lucent Technologies Inc.|Complete user datagram protocol for wireless multimedia packet networks using improved packet level forward error correction coding| US7031257B1|2000-09-22|2006-04-18|Lucent Technologies Inc.|Radio link protocol /point-to-point protocol design that passes corrupted data and error location information among layers in a wireless data transmission protocol| US7490344B2|2000-09-29|2009-02-10|Visible World, Inc.|System and method for seamless switching| US6411223B1|2000-10-18|2002-06-25|Digital Fountain, Inc.|Generating high weight encoding symbols using a basis| US7613183B1|2000-10-31|2009-11-03|Foundry Networks, Inc.|System and method for router data aggregation and delivery| US6694478B1|2000-11-07|2004-02-17|Agere Systems Inc.|Low delay channel codes for correcting bursts of lost packets| US6732325B1|2000-11-08|2004-05-04|Digeo, Inc.|Error-correction with 
limited working storage| US20020133247A1|2000-11-11|2002-09-19|Smith Robert D.|System and method for seamlessly switching between media streams| US7072971B2|2000-11-13|2006-07-04|Digital Foundation, Inc.|Scheduling of multiple files for serving on a server| US7240358B2|2000-12-08|2007-07-03|Digital Fountain, Inc.|Methods and apparatus for scheduling, serving, receiving media-on demand for clients, servers arranged according to constraints on resources| AT464740T|2000-12-15|2010-04-15|British Telecomm|TRANSFER OF SOUND AND / OR PICTURE MATERIAL| AU2092702A|2000-12-15|2002-06-24|British Telecomm|Transmission and reception of audio and/or video material| US6850736B2|2000-12-21|2005-02-01|Tropian, Inc.|Method and apparatus for reception quality indication in wireless communication| US7143433B1|2000-12-27|2006-11-28|Infovalve Computing Inc.|Video distribution system using dynamic segmenting of video data files| US20020085013A1|2000-12-29|2002-07-04|Lippincott Louis A.|Scan synchronized dual frame buffer graphics subsystem| NO315887B1|2001-01-04|2003-11-03|Fast Search & Transfer As|Procedures for transmitting and socking video information| US20080059532A1|2001-01-18|2008-03-06|Kazmi Syed N|Method and system for managing digital content, including streaming media| DE10103387A1|2001-01-26|2002-08-01|Thorsten Nordhoff|Wind power plant with a device for obstacle lighting or night marking| FI118830B|2001-02-08|2008-03-31|Nokia Corp|Streaming playback| US6868083B2|2001-02-16|2005-03-15|Hewlett-Packard Development Company, L.P.|Method and system for packet communication employing path diversity| US20020129159A1|2001-03-09|2002-09-12|Michael Luby|Multi-output packet server with independent streams| US6618541B2|2001-03-14|2003-09-09|Zygo Corporation|Fiber array fabrication| KR100464360B1|2001-03-30|2005-01-03|삼성전자주식회사|Apparatus and method for efficiently energy distributing over packet data channel in mobile communication system for high rate packet transmission| 
TWI246841B|2001-04-22|2006-01-01|Koninkl Philips Electronics Nv|Digital transmission system and method for transmitting digital signals| US20020143953A1|2001-04-03|2002-10-03|International Business Machines Corporation|Automatic affinity within networks performing workload balancing| US6785836B2|2001-04-11|2004-08-31|Broadcom Corporation|In-place data transformation for fault-tolerant disk storage systems| US6820221B2|2001-04-13|2004-11-16|Hewlett-Packard Development Company, L.P.|System and method for detecting process and network failures in a distributed system| US7010052B2|2001-04-16|2006-03-07|The Ohio University|Apparatus and method of CTCM encoding and decoding for a digital communication system| US7035468B2|2001-04-20|2006-04-25|Front Porch Digital Inc.|Methods and apparatus for archiving, indexing and accessing audio and video data| US20020191116A1|2001-04-24|2002-12-19|Damien Kessler|System and data format for providing seamless stream switching in a digital video recorder| US6497479B1|2001-04-27|2002-12-24|Hewlett-Packard Company|Higher organic inks with good reliability and drytime| US7962482B2|2001-05-16|2011-06-14|Pandora Media, Inc.|Methods and systems for utilizing contextual feedback to generate and modify playlists| US6633856B2|2001-06-15|2003-10-14|Flarion Technologies, Inc.|Methods and apparatus for decoding LDPC codes| US7076478B2|2001-06-26|2006-07-11|Microsoft Corporation|Wrapper playlists on streaming media services| US6745364B2|2001-06-28|2004-06-01|Microsoft Corporation|Negotiated/dynamic error correction for streamed media| JP2003018568A|2001-06-29|2003-01-17|Matsushita Electric Ind Co Ltd|Reproducing system, server apparatus and reproducer| US6895547B2|2001-07-11|2005-05-17|International Business Machines Corporation|Method and apparatus for low density parity check encoding of data| US6928603B1|2001-07-19|2005-08-09|Adaptix, Inc.|System and method for interference mitigation using adaptive forward error correction in a wireless RF data 
transmission system| US6961890B2|2001-08-16|2005-11-01|Hewlett-Packard Development Company, L.P.|Dynamic variable-length error correction code| US7110412B2|2001-09-18|2006-09-19|Sbc Technology Resources, Inc.|Method and system to transport high-quality video signals| FI115418B|2001-09-20|2005-04-29|Oplayo Oy|Adaptive media stream| US6990624B2|2001-10-12|2006-01-24|Agere Systems Inc.|High speed syndrome-based FEC encoder and decoder and system using same| US7480703B2|2001-11-09|2009-01-20|Sony Corporation|System, method, and computer program product for remotely determining the configuration of a multi-media content user based on response of the user| US7363354B2|2001-11-29|2008-04-22|Nokia Corporation|System and method for identifying and accessing network services| US7003712B2|2001-11-29|2006-02-21|Emin Martinian|Apparatus and method for adaptive, multimode decoding| JP2003174489A|2001-12-05|2003-06-20|Ntt Docomo Inc|Streaming distribution device and streaming distribution method| US7068729B2|2001-12-21|2006-06-27|Digital Fountain, Inc.|Multi-stage code generator and decoder for communication systems| KR100959573B1|2002-01-23|2010-05-27|노키아 코포레이션|Grouping of image frames in video coding| FI114527B|2002-01-23|2004-10-29|Nokia Corp|Grouping of picture frames in video encoding| CN1625880B|2002-01-30|2010-08-11|Nxp股份有限公司|Streaming multimedia data over a network having a variable bandwith| WO2003071440A1|2002-02-15|2003-08-28|Digital Fountain, Inc.|System and method for reliably communicating the content of a live data stream| JP4126928B2|2002-02-28|2008-07-30|日本電気株式会社|Proxy server and proxy control program| JP4116470B2|2002-03-06|2008-07-09|ヒューレット・パッカード・カンパニー|Media streaming distribution system| FR2837332A1|2002-03-15|2003-09-19|Thomson Licensing Sa|DEVICE AND METHOD FOR INSERTING ERROR CORRECTION AND RECONSTITUTION CODES OF DATA STREAMS, AND CORRESPONDING PRODUCTS| MXPA04010058A|2002-04-15|2004-12-13|Nokia Corp|Rlp logical layer of a communication station.| 
US6677864B2|2002-04-18|2004-01-13|Telefonaktiebolaget L.M. Ericsson|Method for multicast over wireless networks| JP3689063B2|2002-04-19|2005-08-31|松下電器産業株式会社|Data receiving apparatus and data distribution system| JP3629008B2|2002-04-19|2005-03-16|松下電器産業株式会社|Data receiving apparatus and data distribution system| WO2003092305A1|2002-04-25|2003-11-06|Sharp Kabushiki Kaisha|Image encodder, image decoder, record medium, and image recorder| US20030204602A1|2002-04-26|2003-10-30|Hudson Michael D.|Mediated multi-source peer content delivery network architecture| US7177658B2|2002-05-06|2007-02-13|Qualcomm, Incorporated|Multi-media broadcast and multicast service in a wireless communications system| US7200388B2|2002-05-31|2007-04-03|Nokia Corporation|Fragmented delivery of multimedia| US9419749B2|2009-08-19|2016-08-16|Qualcomm Incorporated|Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes| US9288010B2|2009-08-19|2016-03-15|Qualcomm Incorporated|Universal file delivery methods for providing unequal error protection and bundled file delivery services| WO2003105484A1|2002-06-11|2003-12-18|Telefonaktiebolaget L M Ericsson |Generation of mixed media streams| US9240810B2|2002-06-11|2016-01-19|Digital Fountain, Inc.|Systems and processes for decoding chain reaction codes through inactivation| ES2445761T3|2002-06-11|2014-03-05|Digital Fountain, Inc.|Decoding chain reaction codes by inactivation| US6956875B2|2002-06-19|2005-10-18|Atlinks Usa, Inc.|Technique for communicating variable bit rate data over a constant bit rate link| JP4154569B2|2002-07-10|2008-09-24|日本電気株式会社|Image compression / decompression device| JP4120461B2|2002-07-12|2008-07-16|住友電気工業株式会社|Transmission data generation method and transmission data generation apparatus| KR100754419B1|2002-07-16|2007-08-31|노키아 코포레이션|A method for random access and gradual picture refresh in video coding| US7664126B2|2002-07-31|2010-02-16|Sharp Kabushiki Kaisha|Data 
communication apparatus, intermittent communication method therefor, program describing the method and recording medium for recording the program| JP2004070712A|2002-08-07|2004-03-04|Nippon Telegr & Teleph Corp <Ntt>|Data delivery method, data delivery system, split delivery data receiving method, split delivery data receiving device and split delivery data receiving program| AU2002319335B2|2002-08-13|2008-12-04|Nokia Corporation|Symbol interleaving| US6985459B2|2002-08-21|2006-01-10|Qualcomm Incorporated|Early transmission and playout of packets in wireless communication systems| WO2004030273A1|2002-09-27|2004-04-08|Fujitsu Limited|Data delivery method, system, transfer method, and program| JP3534742B1|2002-10-03|2004-06-07|株式会社エヌ・ティ・ティ・ドコモ|Moving picture decoding method, moving picture decoding apparatus, and moving picture decoding program| AU2003277198A1|2002-10-05|2004-05-04|Digital Fountain, Inc.|Systematic encoding and decoding of chain reaction codes| JP2004135013A|2002-10-10|2004-04-30|Matsushita Electric Ind Co Ltd|Device and method for transmission| FI116816B|2002-10-14|2006-02-28|Nokia Corp|Streaming media| US7289451B2|2002-10-25|2007-10-30|Telefonaktiebolaget Lm Ericsson |Delay trading between communication links| US8320301B2|2002-10-25|2012-11-27|Qualcomm Incorporated|MIMO WLAN system| WO2004040831A1|2002-10-30|2004-05-13|Koninklijke Philips Electronics N.V.|Adaptative forward error control scheme| JP2004165922A|2002-11-12|2004-06-10|Sony Corp|Apparatus, method, and program for information processing| GB0226872D0|2002-11-18|2002-12-24|British Telecomm|Video transmission| WO2004047455A1|2002-11-18|2004-06-03|British Telecommunications Public Limited Company|Transmission of video| KR100502609B1|2002-11-21|2005-07-20|한국전자통신연구원|Encoder using low density parity check code and encoding method thereof| US7086718B2|2002-11-23|2006-08-08|Silverbrook Research Pty Ltd|Thermal ink jet printhead with high nozzle areal density| 
JP2004192140A|2002-12-09|2004-07-08|Sony Corp|Data communication system, data transmitting device, data receiving device and method, and computer program| JP2004193992A|2002-12-11|2004-07-08|Sony Corp|Information processing system, information processor, information processing method, recording medium and program| US8135073B2|2002-12-19|2012-03-13|Trident Microsystems Ltd|Enhancing video images depending on prior image enhancements| US7164882B2|2002-12-24|2007-01-16|Poltorak Alexander I|Apparatus and method for facilitating a purchase using information provided on a media playing device| WO2004068715A2|2003-01-29|2004-08-12|Digital Fountain, Inc.|Systems and processes for fast encoding of hamming codes| US7525994B2|2003-01-30|2009-04-28|Avaya Inc.|Packet data flow identification for multiplexing| US7756002B2|2003-01-30|2010-07-13|Texas Instruments Incorporated|Time-frequency interleaved orthogonal frequency division multiplexing ultra wide band physical layer| US7231404B2|2003-01-31|2007-06-12|Nokia Corporation|Datacast file transmission with meta-data retention| US7062272B2|2003-02-18|2006-06-13|Qualcomm Incorporated|Method and apparatus to track count of broadcast content recipients in a wireless telephone network| EP1455504B1|2003-03-07|2014-11-12|Samsung Electronics Co., Ltd.|Apparatus and method for processing audio signal and computer readable recording medium storing computer program for the method| JP4173755B2|2003-03-24|2008-10-29|富士通株式会社|Data transmission server| US7610487B2|2003-03-27|2009-10-27|Microsoft Corporation|Human input security codes| US7266147B2|2003-03-31|2007-09-04|Sharp Laboratories Of America, Inc.|Hypothetical reference decoder| US7408486B2|2003-04-21|2008-08-05|Qbit Corporation|System and method for using a microlet-based modem| JP2004343701A|2003-04-21|2004-12-02|Matsushita Electric Ind Co Ltd|Data receiving reproduction apparatus, data receiving reproduction method, and data receiving reproduction processing program| 
US20050041736A1|2003-05-07|2005-02-24|Bernie Butler-Smith|Stereoscopic television signal processing method, transmission system and viewer enhancements| KR100492567B1|2003-05-13|2005-06-03|엘지전자 주식회사|Http-based video streaming apparatus and method for a mobile communication system| US7113773B2|2003-05-16|2006-09-26|Qualcomm Incorporated|Reliable reception of broadcast/multicast content| JP2004348824A|2003-05-21|2004-12-09|Toshiba Corp|Ecc encoding method and ecc encoding device| US7483525B2|2003-05-23|2009-01-27|Navin Chaddha|Method and system for selecting a communication channel with a recipient device over a communication network| JP2004362099A|2003-06-03|2004-12-24|Sony Corp|Server device, information processor, information processing method, and computer program| MXPA05013237A|2003-06-07|2006-03-09|Samsung Electronics Co Ltd|Apparatus and method for organization and interpretation of multimedia data on a recording medium.| KR101003413B1|2003-06-12|2010-12-23|엘지전자 주식회사|Method for compression/decompression the transferring data of mobile phone| US7603689B2|2003-06-13|2009-10-13|Microsoft Corporation|Fast start-up for digital video streams| RU2265960C2|2003-06-16|2005-12-10|Федеральное государственное унитарное предприятие "Калужский научно-исследовательский институт телемеханических устройств"|Method for transferring information with use of adaptive alternation| US7391717B2|2003-06-30|2008-06-24|Microsoft Corporation|Streaming of variable bit rate multimedia content| US20050004997A1|2003-07-01|2005-01-06|Nokia Corporation|Progressive downloading of timed multimedia content| US8149939B2|2003-07-07|2012-04-03|Samsung Electronics Co., Ltd.|System of robust DTV signal transmissions that legacy DTV receivers will disregard| US7254754B2|2003-07-14|2007-08-07|International Business Machines Corporation|Raid 3+3| KR100532450B1|2003-07-16|2005-11-30|삼성전자주식회사|Data recording method with robustness for errors, data reproducing method therefore, and apparatuses therefore| 
US20050028067A1|2003-07-31|2005-02-03|Weirauch Charles R.|Data with multiple sets of error correction codes| US8694869B2|2003-08-21|2014-04-08|QUALCIMM Incorporated|Methods for forward error correction coding above a radio link control layer and related apparatus| CN1868157B|2003-08-21|2011-07-27|高通股份有限公司|Methods for forward error correction coding above a radio link control layer and related apparatus| IL157885D0|2003-09-11|2004-03-28|Bamboo Mediacasting Ltd|Iterative forward error correction| IL157886D0|2003-09-11|2009-02-11|Bamboo Mediacasting Ltd|Secure multicast transmission| JP4183586B2|2003-09-12|2008-11-19|三洋電機株式会社|Video display device| JP4988346B2|2003-09-15|2012-08-01|ザ・ディレクティービー・グループ・インコーポレイテッド|Method and system for adaptive transcoding and rate conversion in video networks| KR100608715B1|2003-09-27|2006-08-04|엘지전자 주식회사|SYSTEM AND METHOD FOR QoS-QUARANTED MULTIMEDIA STREAMING SERVICE| EP1521373B1|2003-09-30|2006-08-23|Telefonaktiebolaget LM Ericsson |In-place data deinterleaving| US7559004B1|2003-10-01|2009-07-07|Sandisk Corporation|Dynamic redundant area configuration in a non-volatile memory system| EP2722995A3|2003-10-06|2018-01-17|Digital Fountain, Inc.|Soft-decision decoding of multi-stage chain reaction codes| US7516232B2|2003-10-10|2009-04-07|Microsoft Corporation|Media organization for distributed sending of media data| US7614071B2|2003-10-10|2009-11-03|Microsoft Corporation|Architecture for distributed sending of media data| CN100555213C|2003-10-14|2009-10-28|松下电器产业株式会社|Data converter| US7650036B2|2003-10-16|2010-01-19|Sharp Laboratories Of America, Inc.|System and method for three-dimensional video coding| US7168030B2|2003-10-17|2007-01-23|Telefonaktiebolaget Lm Ericsson |Turbo code decoder with parity information update| US8132215B2|2003-10-27|2012-03-06|Panasonic Corporation|Apparatus for receiving broadcast signal| JP2005136546A|2003-10-29|2005-05-26|Sony Corp|Transmission apparatus and method, recording medium, and program| 
EP1528702B1|2003-11-03|2008-01-23|Broadcom Corporation|FEC decoding with dynamic parameters| US20050102371A1|2003-11-07|2005-05-12|Emre Aksu|Streaming from a server to a client| WO2005055016A2|2003-12-01|2005-06-16|Digital Fountain, Inc.|Protection of data from erasures using subsymbol based codes| US7428669B2|2003-12-07|2008-09-23|Adaptive Spectrum And Signal Alignment, Inc.|Adaptive FEC codeword management| US7574706B2|2003-12-15|2009-08-11|Microsoft Corporation|System and method for managing and communicating software updates| US7590118B2|2003-12-23|2009-09-15|Agere Systems Inc.|Frame aggregation format| JP4536383B2|2004-01-16|2010-09-01|株式会社エヌ・ティ・ティ・ドコモ|Data receiving apparatus and data receiving method| KR100770902B1|2004-01-20|2007-10-26|삼성전자주식회사|Apparatus and method for generating and decoding forward error correction codes of variable rate by using high rate data wireless communication| KR100834750B1|2004-01-29|2008-06-05|삼성전자주식회사|Appartus and method for Scalable video coding providing scalability in encoder part| JP4321284B2|2004-02-03|2009-08-26|株式会社デンソー|Streaming data transmission apparatus and information distribution system| US7599294B2|2004-02-13|2009-10-06|Nokia Corporation|Identification and re-transmission of missing parts| KR100596705B1|2004-03-04|2006-07-04|삼성전자주식회사|Method and system for video coding for video streaming service, and method and system for video decoding| KR100586883B1|2004-03-04|2006-06-08|삼성전자주식회사|Method and apparatus for video coding, pre-decoding, video decoding for vidoe streaming service, and method for image filtering| US7609653B2|2004-03-08|2009-10-27|Microsoft Corporation|Resolving partial media topologies| US20050207392A1|2004-03-19|2005-09-22|Telefonaktiebolaget Lm Ericsson |Higher layer packet framing using RLP| US7240236B2|2004-03-23|2007-07-03|Archivas, Inc.|Fixed content distributed data storage using permutation ring encoding| US7930184B2|2004-08-04|2011-04-19|Dts, Inc.|Multi-channel audio coding/decoding of random 
access points and transients| JP4433287B2|2004-03-25|2010-03-17|ソニー株式会社|Receiving apparatus and method, and program| US8842175B2|2004-03-26|2014-09-23|Broadcom Corporation|Anticipatory video signal reception and processing| US20050216472A1|2004-03-29|2005-09-29|David Leon|Efficient multicast/broadcast distribution of formatted data| KR20070007810A|2004-03-30|2007-01-16|코닌클리케 필립스 일렉트로닉스 엔.브이.|System and method for supporting improved trick mode performance for disc-based multimedia content| TW200534875A|2004-04-23|2005-11-01|Lonza Ag|Personal care compositions and concentrates for making the same| FR2869744A1|2004-04-29|2005-11-04|Thomson Licensing Sa|METHOD FOR TRANSMITTING DIGITAL DATA PACKETS AND APPARATUS IMPLEMENTING THE METHOD| US7633970B2|2004-05-07|2009-12-15|Agere Systems Inc.|MAC header compression for use with frame aggregation| EP1743431A4|2004-05-07|2007-05-02|Digital Fountain Inc|File download and streaming system| US20050254575A1|2004-05-12|2005-11-17|Nokia Corporation|Multiple interoperability points for scalable media coding and transmission| US20060037057A1|2004-05-24|2006-02-16|Sharp Laboratories Of America, Inc.|Method and system of enabling trick play modes using HTTP GET| US8331445B2|2004-06-01|2012-12-11|Qualcomm Incorporated|Method, apparatus, and system for enhancing robustness of predictive video codecs using a side-channel based on distributed source coding techniques| US20070110074A1|2004-06-04|2007-05-17|Bob Bradley|System and Method for Synchronizing Media Presentation at Multiple Recipients| US7139660B2|2004-07-14|2006-11-21|General Motors Corporation|System and method for changing motor vehicle personalization settings| US8112531B2|2004-07-14|2012-02-07|Nokia Corporation|Grouping of session objects| US8544043B2|2004-07-21|2013-09-24|Qualcomm Incorporated|Methods and apparatus for providing content information to content servers| US7409626B1|2004-07-28|2008-08-05|Ikanos Communications Inc|Method and apparatus for determining codeword 
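The abstract above describes generating an encoded symbol by combining a first symbol derived from a first set of intermediate symbols with a second symbol derived from a second set of intermediate symbols (the permanently inactivated set). The sketch below is only an illustrative stand-in for that combination step, assuming XOR arithmetic over fixed-size byte symbols and a hypothetical hash-seeded neighbor selection; it does not reproduce the patent's actual degree distribution or encoding parameters:

```python
import random

SYMBOL_SIZE = 4  # illustrative symbol size in bytes (assumption)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encoded_symbol(lt_symbols, pi_symbols, key: int) -> bytes:
    """Combine a symbol from the first intermediate set (lt_symbols)
    with a symbol from the second, permanently inactivated set
    (pi_symbols). The degree and neighbor choices here are a
    hypothetical deterministic stand-in seeded by the symbol key."""
    rng = random.Random(key)
    # First part: XOR of a key-dependent subset of the first set.
    degree = rng.randint(1, len(lt_symbols))
    acc = bytes(SYMBOL_SIZE)
    for i in rng.sample(range(len(lt_symbols)), degree):
        acc = xor_bytes(acc, lt_symbols[i])
    # Second part: always mix in a fixed small number of
    # permanently inactivated symbols (here 2, an assumption).
    for i in rng.sample(range(len(pi_symbols)), 2):
        acc = xor_bytes(acc, pi_symbols[i])
    return acc
```

Because the neighbor choice is seeded by the key, an encoder and decoder that share the key can reconstruct which intermediate symbols each encoded symbol depends on, which is what lets the decoder schedule the permanently inactivated set separately.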
Legal status:
- 2020-08-18 | B06U | Preliminary requirement: requests with searches performed by other patent offices; procedure suspended [chapter 6.21 patent gazette]
- 2021-01-05 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
- 2021-03-23 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]. Free-format text: "Term of validity: 10 (ten) years counted from 03/23/2021, subject to legal conditions."
Priority (application number | filing date | title):
- US 61/235,285 (US23528509P, provisional) | 2009-08-19
- US 12/604,773 (US7956772B2) | 2009-10-23 | "Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes"
- US 61/257,146 (US25714609P, provisional) | 2009-11-02
- US 61/353,910 (US35391010P, provisional) | 2010-06-11
- US 12/859,161 (US9419749B2) | 2010-08-18 | "Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes"
- PCT/US2010/046027 (WO2011022555A2) | 2010-08-19 | "Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes"